Change password of the openHAB Console on Synology

To change the OpenHab Console password, you have to edit the /userdata/etc/users.properties file.


First, open a SSH console on your Synology as root (See here).

Then, create a hashed password with the following command (replace ThisIsMyNewPassword with yours):

echo -n ThisIsMyNewPassword | sha256sum

It should output something like this:

8fda687cf4127db96321c86907cbea99dabb0b13aa4bf7555655e1df45e41938 -

If you installed openHab as explained here, the file to be edited is /openHAB/userdata/etc/users.properties in the share /SmartHome of your Synology. Copy the hashed string above (without the dash and the blank) between the {CRYPT} tags:

# This file contains the users, groups, and roles.
# Each line has to be of the format:
#
# USER=PASSWORD,ROLE1,ROLE2,...
# USER=PASSWORD,_g_:GROUP,...
# _g_\:GROUP=ROLE1,ROLE2,...
#
# All users, groups, and roles entered in this file are available after Karaf startup
# and modifiable via the JAAS command group. These users reside in a JAAS domain
# with the name "karaf".
#
openhab = {CRYPT}8fda687cf4127db96321c86907cbea99dabb0b13aa4bf7555655e1df45e41938{CRYPT},_g_:admingroup
_g_\:admingroup = group,admin,manager,viewer,systembundles
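
If you prefer to script the whole change, here is a minimal sketch (assuming the installation described above; adjust the path to users.properties if yours differs):

# Hash the new password and keep only the hex digest (drop the trailing " -")
HASH=$(echo -n 'ThisIsMyNewPassword' | sha256sum | awk '{print $1}')

# Replace whatever currently sits between the {CRYPT} tags for the user 'openhab'
sed -i "s|^openhab = {CRYPT}.*{CRYPT}|openhab = {CRYPT}${HASH}{CRYPT}|" \
  /volume1/SmartHome/openHAB/userdata/etc/users.properties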

To test the new password, open an SSH console on openHAB. As, by default, it may only be accessed from localhost, the best option is to use GateOne (see here). Once logged into GateOne on your Synology, execute:

ssh -p 8101 openhab@localhost

You should be prompted to enter your password and, if it is correct, you will get the openHAB console.

Type Ctrl-D to exit the openHAB console.
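
A single console command can also be executed without opening an interactive session, which is handy for scripts (a sketch; bundle:list is a standard Karaf console command):

ssh -p 8101 openhab@localhost "bundle:list"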


NB: instead of logging into GateOne as admin, you can connect directly to openHAB in GateOne using the port '8101' and the login 'openhab'.

Backup & Restore openHAB 2.x on Synology

In order to upgrade from openHAB 2.4 to 2.5, I had to back up the configuration of openHAB, uninstall v2.4, install v2.5 and restore the configuration.


If you installed openHAB as explained here, you can copy all the folders under /openHAB in the share /SmartHome of your Synology.

OpenHAB 2.x currently has two different ways of setting up things:

  • Either through textual configuration (in /SmartHome/openHAB/conf folder) or
  • through the user interface which saves to a “jsonDB” database (in /SmartHome/openHAB/userdata folder).

Both the textual configuration files and the database folders must be backed up (see here).
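
If you just want a quick manual snapshot of both folders, a simple archive does the job (a sketch, assuming the share described above):

tar -czf /tmp/openhab-manual-backup.tar.gz \
  /volume1/SmartHome/openHAB/conf \
  /volume1/SmartHome/openHAB/userdata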

openHAB 2.x now comes with scripts to backup and restore its configuration and database. They are available in the folder /runtime/bin. You can access them via an SSH console on your Synology, under /var/packages/openHAB/target/runtime/bin/ (equivalent to /volume1/@appstore/openHAB/runtime/bin).

These scripts take care of backing up not only the files that you have manually edited in the folder /conf (items, things, scripts, …), but also everything configured via PaperUI or HABPanel and stored in the folder /userdata (habmin, jsondb, …).

Attention, these scripts do not take care of:

  • backing up the jar files that you have installed manually, e.g. in /addons (see the sketch after this list)
  • backing up the DB you would be using, e.g., for persistence, …
  • adding the openHAB user ('openhab') to the dialout and tty groups if you did this previously
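
For the jar files, a plain copy is enough (a sketch, assuming the installation described above):

cp -a /volume1/SmartHome/openHAB/addons /tmp/openhab-addons-backup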

First, prepare your Synology

  1. Open a SSH console on your Synology as root (See here)
  2. Install the Synology diagnosis tools (synogear), required to have the command pgrep used by the restore script of openHAB, by typing the command:
    synogear install
  3. Modify the script '/runtime/bin/restore' to replace unzip (not available anymore on Synology) with 7zip. Concretely, replace:

command -v unzip >/dev/null 2>&1 || {
echo "'unzip' program was not found, please install it first." >&2
exit 1
}

with

command -v 7z >/dev/null 2>&1 || {
echo "'7z' program was not found, please install it first." >&2
exit 1
}

and 

unzip -oq "$InputFile" -d "$TempDir" || {
echo "Unable to unzip $InputFile, Aborting..." >&2
exit 1
}

with

7z x -y -o"$TempDir" "$InputFile" > /dev/null || {
echo "Unable to unzip $InputFile, Aborting..." >&2
exit 1
}
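
NB: instead of editing the script by hand, the two replacements above can also be applied with sed (a sketch; it assumes your restore script matches the stock text quoted above, so double-check the result before running it):

cd /var/packages/openHAB/target/runtime/bin
cp restore restore.bak
# Swap the availability check from unzip to 7z
sed -i "s|command -v unzip|command -v 7z|" restore
sed -i "s|'unzip' program|'7z' program|" restore
# Swap the actual extraction command
sed -i 's|unzip -oq "$InputFile" -d "$TempDir"|7z x -y -o"$TempDir" "$InputFile" > /dev/null|' restore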

Next, use the following commands to back up your configuration:

  1. sudo -i
  2. cd /var/packages/openHAB/target
  3. synoservice --stop pkgctl-openHAB
  4. ./runtime/bin/backup
  5. synoservice --start pkgctl-openHAB

You should see something like this as output:

#########################################
openHAB 2.x.x backup script
#########################################

Using '/volume1/@appstore/openHAB/conf' as conf folder...
Using '/volume1/@appstore/openHAB/userdata' as userdata folder...
Using '/volume1/@appstore/openHAB/runtime' as runtime folder...
Using '/volume1/@appstore/openHAB/backups' as backup folder...
Writing to '/volume1/@appstore/openHAB/backups/openhab2-backup-19_12_25-12_27_33.zip'...
Making Temporary Directory if it is not already there
Using /tmp/openhab2/backup as TempDir
Copying configuration to temporary folder...
Removing unnecessary files...
Zipping folder...
Removing temporary files...
Success! Backup made in /volume1/@appstore/openHAB/backups/openhab2-backup-19_12_25-12_27_33.zip

Before uninstalling openHAB, if you intend to install a new version, copy the backup into a safe folder, like the tmp folder:

cp /volume1/@appstore/openHAB/backups/openhab2-backup-19_12_25-12_27_33.zip /tmp/openhab2-backup.zip

Finally, use the following commands to restore your configuration:

  1. sudo -i
  2. cd /var/packages/openHAB/target
  3. synoservice --stop pkgctl-openHAB
  4. ./runtime/bin/restore /tmp/openhab2-backup.zip
  5. synoservice --start pkgctl-openHAB

You should see an output like this:

##########################################
openHAB 2.x.x restore script
##########################################

Using '/volume1/@appstore/openHAB/conf' as conf folder...
Using '/volume1/@appstore/openHAB/userdata' as userdata folder...
Making Temporary Directory
Extracting zip file to temporary folder.

Backup Information:
-------------------
Backup Version | 2.5.0 (You are on 2.4.0)
Backup Timestamp | 19_12_25-12_27_33
Config belongs to user | openhab
from group | users

Your current configuration will become owned by openhab:users.

Any existing files with the same name will be replaced.
Any file without a replacement will be deleted.

Okay to Continue? [y/N]: y
Moving system files in userdata to temporary folder
Deleting old userdata folder...
Restoring system files in userdata...
Deleting old conf folder...
Restoring openHAB with backup configuration...
Deleting temporary files...
Backup successfully restored!


If you open the openHAB web page immediately, you will see that it's restoring the UIs:

Please stand by while UIs are being installed. This can take several minutes.

Once done, you will have access to your PaperUI, BasicUI, HabPanel, etc…

Web Consoles to execute bash commands on Synology

I am using two different Web Consoles to execute commands on my Synology: the Web Console of Nickolay Kovalev and GateOne.


Such Web Consoles are a bit easier to launch than an SSH console via Putty (see here). They can be opened directly from the DSM of your Synology. Another advantage is that a Web Console is still valid (open) when the PC comes back from sleep/hibernation.

To use the Web Console of Nickolay Kovalev, install my Synology Package "MODS_Web_Console", available on my SSPKS server or on my GitHub.

It is very convenient to execute basic commands. But you can't use it to run vi, ssh, and other commands which interact with the display, the keyboard, etc…

To use the more advanced Web Console GateOne, install my Synology Package "MODS_GateOne", available on my SSPKS server or on my GitHub.

It is really powerful and secure. You can use it to open multiple ssh sessions, edit files with vi, etc…

Basic Authentication in subfolders with nginx on Synology

I am using nginx as the default webserver on my Synology. Here is how to configure a login/password prompt on a subfolder of the WebStation.


First, open an SSH console on your Synology as root (see here).

Notice that you cannot change the config file of nginx (/etc/nginx/nginx.conf). Even if you do so, your changes will be removed automatically. But in that config file, for the server listening on the port of your WebStation (port 80 by default), you can see that it loads extra config files named www.*.conf from the folder /usr/syno/share/nginx/conf.d/:

include /usr/syno/share/nginx/conf.d/www.*.conf;
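
You can double-check which extra files nginx actually loads by dumping its full configuration (assuming your nginx build supports the -T flag):

nginx -T 2>/dev/null | grep "conf.d/www"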

So, go to that folder: cd /usr/syno/share/nginx/conf.d

There, create a password file with your login and password

Type: htpasswd -c -b .htpasswd YourLogin YourPassword

The parameter -c is there to create the file. Only use it if the file does not exist yet, otherwise you will wipe its current content!

Look at the content of the file: cat .htpasswd

It should be similar to this: 

YourLogin:$apr1$hUZ87.Mo$WUHtZHjtPWbBCD4jezDh72
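
If your htpasswd build supports the -v flag (Apache 2.4+), you can verify a password against the file without comparing hashes by hand:

htpasswd -vb .htpasswd YourLogin YourPassword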

Now, create and edit a file named like: www.protected.conf (e.g. using vi).

Assuming that you want to protect a subfolder You/Sub/Folder existing in the root of your WebStation, you should theoretically put this into www.protected.conf:

location /You/Sub/Folder {
auth_basic "Any Prompt Message You Want";
auth_basic_user_file /usr/syno/share/nginx/conf.d/.htpasswd;
}

Next, you have to restart nginx with the command: nginx -s reload

Or even better, to be 100% sure: synoservicecfg --restart nginx


But for some reason, this is not working on Synology (at least on mine?!). It seems that the location block does not work without a modifier like = or ~.

I searched for hours why it was not working, without finding an answer.

For example, I tested first that the following webpage was working fine on my NAS: http://MyNas/SubFolder/index.php

Next, I edited www.protected.conf with the following block before restarting nginx, deleting the cache of my browser and restarting my browser (THIS IS MANDATORY each time one changes a location):

location /SubFolder/index.php {
return 301 https://www.google.be/search;
}

But opening http://MyNas/SubFolder/index.php didn't redirect me to Google.

Next, I tried with:

location = /SubFolder/index.php {
return 301 https://www.google.be/search;
}

And this was working! So, I thought that the path used as location was possibly incorrect. To see the path captured as location, I next tried:

location ~ (/SubFolder/index.php) {
return 301 http://Fake$1;
}

Opening now http://MyNas/SubFolder/index.php, I got the (unavailable) page http://Fake/SubFolder/index.php

So, the path /SubFolder/index.php was definitely correct.

I think that my directive is included before another one which overrides it. Possibly this one, found in /etc/nginx/nginx.conf:

location / {
rewrite ^ / redirect;
}

So, I have no choice but to use the modifier = (exact match) or ~ (matching a regular expression). Unfortunately, doing so, another problem arises… the php pages are not reachable anymore :(

If you look at the log file of nginx: cat /var/log/nginx/error.log

You see:

2019/11/30 21:22:12 [error] 25657#25657: *50 open() "/etc/nginx/html/SubFolder/index.php" failed (2: No such file or directory), client: 192.168.0.1, server: _, request: "GET /SubFolder/index.php HTTP/1.1", host: "MyNas"

This is because nginx is using its default root folder /etc/nginx/html/ instead of inheriting the one defined for the WebStation.

The solution is to simply specify the root in the location block: root /var/services/web;

But now, instead of being executed, the php script is downloaded… Argh!

The following location works, by redefining the execution of the php page:

location = /SubFolder/index.php {
  root /var/services/web;

  try_files $uri /index.php =404;
  fastcgi_pass unix:/var/run/php-fpm/php73-fpm.sock;
  fastcgi_index index.php;
  include fastcgi.conf;
}

Pay attention: corrupting the config of nginx will make your DSM unable to run anymore! Always check that your config is correct with the command: nginx -t


Ok, now, to handle both folders and php pages, one can use this variant of the location above:

location ~ /SubFolder/ {
  root /var/services/web;

  location ~ \.php$ {
    try_files $uri /index.php =404;
    fastcgi_pass unix:/var/run/php-fpm/php73-fpm.sock;
    fastcgi_index index.php;
    include fastcgi.conf;
  }
}

In the sub-location above, use php73-fpm.sock, php70-fpm.sock, php56-fpm.sock, etc., according to the version of php used by default with your nginx in your WebStation!
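
To see which sockets are actually available on your box, simply list them:

ls /var/run/php-fpm/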

This is more or less working fine… I still have issues as some server variables are not passed to the php pages… But it works well enough for my purpose. We are now only missing the basic authentication! Here is the final location block:

location ~ /You/Sub/Folder/ {
  root /var/services/web;

  location ~ \.php$ {
    try_files $uri /index.php =404;
    fastcgi_pass unix:/var/run/php-fpm/php73-fpm.sock;
    fastcgi_index index.php;
    include fastcgi.conf;
  }

  auth_basic "Any Prompt Message You Want";
  auth_basic_user_file /usr/syno/share/nginx/conf.d/.htpasswd;

}

Once this is in place, nginx restarted, and your browser cleaned and restarted too, you finally get the prompt for your login and password.

If you type a wrong login or password, you can see the error in the nginx log file: cat /var/log/nginx/error.log

2019/11/30 17:51:47 [error] 12258#12258: *145 user "mystery" was not found in "/usr/syno/share/nginx/conf.d/.htpasswd", client: 192.168.0.1, server: _, request: "GET /err/You/Sub/Folder/ HTTP/1.1", host: "MyNas"

2019/11/30 17:59:52 [error] 20130#20130: *3 user "mystery": password mismatch, client: 192.168.0.1, server: _, request: "GET /You/Sub/Folder/ HTTP/1.1", host: "MyNas"

Et voilà… Not perfect, and not clear why it's not working out of the box as documented here… But it does the job for my purpose.

Install openHAB 2 on Synology

I was looking for a single, local platform to control all my connected devices: Philips Hue, Somfy blinds, Fibaro Wall Plugs, IFTTT scenarios, … openHAB 2 can do that once installed on a Synology!


To install openHAB 2 on Synology, I used the official doc and this great tutorial from Thomas Schwarz.

Prerequisites

First, install Java on your Synology. It is available as a package in the Package Center.

Once installed, upgrade it to the latest version, to be downloaded directly from the Oracle web site.

Next, "Enable user home service" via the Menu > Control Panel > User.

Then, create a Shared Folder "SmartHome" via the Menu > Control Panel > Shared Folder (pay attention to the case!).

And finally, via the Menu > File Station, create a subfolder "openHAB" in the Shared Folder "SmartHome" (pay attention to the case!).

Create next the 3 following subfolders under 'openHAB': 'addons', 'conf' and 'userdata'. If you don't create those subfolders, they will be created in '/var/packages/openHAB/target' and you won't be able to access them via the Shared Folder SmartHome. Hence, you won't be able to edit the configuration files easily later…
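
If you prefer the command line over File Station, the same subfolders can be created at once from an SSH console (assuming the share lives on volume1):

mkdir -p /volume1/SmartHome/openHAB/addons /volume1/SmartHome/openHAB/conf /volume1/SmartHome/openHAB/userdata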

Installation

Download now the package "openHAB 2" from its GitHub Repository.

And install it manually via the Package Center. Use the subfolder "openHAB" created in the Shared Folder "SmartHome".

I installed the Z-Wave module as I have a Z-Wave key installed on my Synology.

Once installed, check that you see the expected content in the folder "openHAB" of the Shared Folder "SmartHome". You should also have content in the folder "openHab" of the Shared Folder "homes". Finally, check that openHAB is running fine and finalize the setup by opening the package "openHAB" via the Package Center > Installed > openHAB. There, click on the "Url" at the bottom of the screen.

A new tab should open with a page where you can select a pre-defined configuration. I am using the "Standard" one.

Et voilà.

You can now proceed further with the configuration, as explained here.

Customize Mac & Serial of Xpenology images to run Synology in VMWare

Here are:

  • all my Xpenology packages used to emulate a Synology with VMWare and
  • how to customize their Mac Address as well as their Serial Number


The DSM images available in the packages above come from these Synology archives.

You can find here or here how to create a Virtual Synology with these packages (notice I have fine-tuned those, among others to put the disks in the right boot order for the DS3617xs).


Before importing a package in VMWare, you can update its Mac Address and its Serial to make them unique in your network (e.g. if you duplicate the images). For that purpose, you need OSFMount, a tool able to update (read and write) the content of .img files.

Run OSFMount and open the disk image file "synoboot.img" (do not mount it as a Ram Drive).

"Select All" the partitions as virtual disks and uncheck the flag "Read-only drive".

Once opened, double-click on the first drive in the list to open it. It contains the settings to be customized in the file drive:\grub\grub.cfg. Edit that file to change the Serial (set sn) and the MAC address (set mac1).

A new MAC address can be generated using VMWare, via a Network Adapter > Advanced > Generate. Click on the button Generate a few times and copy the value in the file above (removing the colons).

A new Serial can be generated on this site.

Once done editing, "Dismount All" the drives in OSFMount. You can now import the virtual machines in VMWare.

Et voilà.

Use a Bridged Network for a Virtual Synology using VMWare

Within the Virtual "Synologies" created as described here and here, I was using NAT for the Network Connection. Using a Bridged Network Connection instead is not easy, but it can work.


I wanted to reconfigure all my Virtual Synologies to use a Bridged Network Connection instead of NAT.

But once this is done, the Virtual Synology does not appear anymore as configured in the Synology Assistant (which opens the Network Wizard). And trying to reach it via a browser, on its admin port, results in a connection timeout.

If I wait for several minutes (sometimes more than 10) and try again and again to reach my various Virtual Synologies on their admin ports, I finally get them.

I don't know yet why this is not immediate?!… It seems to be an issue with the Bridged Connection of VMWare under Windows 10.


I tried to clean the ARP table (run a Command Prompt as Administrator on Windows and type: arp -d *), but without success. And the problem comes back not only each time the VM is restarted, but also sometimes after the VM has been running for a while…

I did check that the Mac Address of each Synology (displayed by the Synology Assistant) was correctly defined in VMWare.

See here how to customize the MAC Address of a Synology image.


I also checked that the Bridged Connections were correctly configured in VMWare as suggested here:

  1. Be sure your vm is stopped.
  2. Run the VMWare Virtual Network Editor (click Start and search for Virtual Network Editor)
  3. Run it as administrator (or click the button at the bottom of the screen that says "change settings". VMNet0 will display when running as administrator. Otherwise, it will not be visible)
  4. Highlight VMNet0 and click on "Automatic Settings"
  5. You will see a list of adapters. De-select all but the physical network card. (When I set this up with Player, I had selected only the one. After installing Workstation, all of the items were checked.)
  6. Click “OK”
  7. Click “Ok”
  8. Start the VM and test.


Next, I tried various tips from here and here, such as stopping and restarting the vmnetbridge. The best results are achieved by deleting all the virtual adapters in the VMWare Virtual Network Editor, creating a new one bridged to a specific Ethernet Adapter and finally using that one as a "Custom: Specific virtual network" Network Adapter for each VM.


But I still randomly have some VMs with a "Connection Failed" status in the Synology Assistant. If I find how to definitively fix this issue, I will post it here.

Synology's Scheduled Tasks

I would like to find how to create Scheduled Tasks executing User-Defined Scripts on Synology using commands in a shell script. But I didn't find how. Here is the only info I was able to gather.


The tasks created via Control Panel > Task Scheduler > Create > Scheduled Task > User-defined script are stored in the file /etc/crontab.

The task ids and definitions are stored in /usr/syno/etc/scheduled_tasks. Ex.:

[1]
id=1
last work hour=23
can edit owner=1
can delete from ui=1
edit dialog=SYNO.SDS.TaskScheduler.EditDialog
type=daily
action=#common:run#: /usr/local/bin/php73 /var/packages/MODS_ServerMonitor/target/ui/cron/status.cron.php
can edit from ui=1
week=1111111
app name=#common:command_line#
name=Update Server Mon
can run app same time=1
owner=0
repeat min store config=[1,5,10,15,20,30]
repeat hour store config=[1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23]
simple edit form=1
repeat hour=0
listable=1
app args={"notify_enable":false,"notify_if_error":false,"notify_mail":"","script":"/usr/local/bin/php73 /var/packages/MODS_ServerMonitor/target/ui/cron/status.cron.php"}
state=enabled
can run task same time=0
start day=0
cmd=L3Vzci9sb2NhbC9iaW4vcGhwNzMgL3Zhci9wYWNrYWdlcy9NT0RTX1NlcnZlck1vbml0b3IvdGFyZ2V0L3VpL2Nyb24vc3RhdHVzLmNyb24ucGhw
run hour=0
edit form=SYNO.SDS.TaskScheduler.Script.FormPanel
app=SYNO.SDS.TaskScheduler.Script
run min=0
start month=0
can edit name=1
start year=0
can run from ui=1
repeat min=15
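
NB: the cmd field is simply the command encoded in base64; you can decode it to double-check what the task really runs:

echo "L3Vzci9sb2NhbC9iaW4vcGhwNzMgL3Zhci9wYWNrYWdlcy9NT0RTX1NlcnZlck1vbml0b3IvdGFyZ2V0L3VpL2Nyb24vc3RhdHVzLmNyb24ucGhw" | base64 -d
# outputs: /usr/local/bin/php73 /var/packages/MODS_ServerMonitor/target/ui/cron/status.cron.php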

The task can also be displayed via a command line run as root (see here): sudo synoschedtask --get id=1

ID: [1]
Name: [Update Server Mon]
State: [enabled]
Owner: [root]
Type: [daily]
Start date: [0/0/0]
Run time: [0]:[0]
Repeat every [15] min (s) until [23]:[45]
Command: [/usr/local/bin/php73 /var/packages/MODS_ServerMonitor/target/ui/cron/status.cron.php]
Last Run Time: Mon Oct 28 23:00:02 2019
Status: [Success]


Use opkg instead of ipkg on Synology

IPKG has not been maintained since 2014. As a replacement, one can use Entware, which offers more than 1800 packages.


First, open an SSH session on your NAS as root (see here).

Check whether your CPU model is armv5, armv7, mips, x86-32 or x86-64. You can do so with one of the following commands:

  • cat /proc/cpuinfo | grep -m 1 'model name' | cut -d ":" -f 2 | cut -d "@" -f 1
  • uname -a

Create now a folder to install Entware (NB.: The folder /opt may not yet exist. I.e.: Optware may not be installed yet. We will delete it if it exists. If it cannot be deleted – you could have a message that it’s in use – then reboot your Synology first).

mkdir -p /volume1/@entware-ng/opt
rm -rf /opt
ln -sf /volume1/@entware-ng/opt /opt

Depending on your CPU, execute one of the following commands:

  • For armv5: wget -O - http://pkg.entware.net/binaries/armv5/installer/entware_install.sh | /bin/sh
  • For armv7: wget -O - http://pkg.entware.net/binaries/armv7/installer/entware_install.sh | /bin/sh
  • For mips: wget -O - http://pkg.entware.net/binaries/mipsel/installer/installer.sh | /bin/sh
  • For x86-32: wget -O - http://pkg.entware.net/binaries/x86-32/installer/entware_install.sh | /bin/sh
  • For x86-64: wget -O - http://pkg.entware.net/binaries/x86-64/installer/entware_install.sh | /bin/sh
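
If you are not sure which build you need, uname -m can drive the choice (a sketch mapping common machine strings to the installer URLs above; adjust if your CPU reports something else):

# Pick the Entware installer matching the CPU architecture
case "$(uname -m)" in
  armv5*) URL=http://pkg.entware.net/binaries/armv5/installer/entware_install.sh ;;
  armv7*) URL=http://pkg.entware.net/binaries/armv7/installer/entware_install.sh ;;
  mips*)  URL=http://pkg.entware.net/binaries/mipsel/installer/installer.sh ;;
  i?86)   URL=http://pkg.entware.net/binaries/x86-32/installer/entware_install.sh ;;
  x86_64) URL=http://pkg.entware.net/binaries/x86-64/installer/entware_install.sh ;;
esac
wget -O - "$URL" | /bin/sh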

Go now to your DSM and open the "Control Panel". There, select the "Task Scheduler" > "Create" > "Triggered Task" > "User-defined script".

Configure this new task to run at "Boot-up":

And to run the following commands:

/bin/ln -sf /volume1/@entware-ng/opt /opt
/opt/etc/init.d/rc.unslung start

Finally, to include /opt/bin and /opt/sbin in the PATH variable, add . /opt/etc/profile at the end of /etc/profile with this command:

echo ". /opt/etc/profile" >> /etc/profile

You can now use the command opkg. The first action to do is: opkg update

Check the list of packages available with: opkg list | more


NB.: To remove IPKG from your Synology:

  1. umount /opt
  2. rm -R /opt
  3. rm -R /volume1/opt or rm -R /volume1/@optware (depends on where IPKG was installed)
  4. delete every reference to optware in /etc/rc.local
  5. rm /etc/rc.optware
  6. check that there is nothing related to ipkg in /etc/crontab
  7. reboot your NAS

Retrieve files and folders from a Synology C2 backup

My Synology NAS died recently and I wanted to retrieve some content from my Synology C2 backup.


After 3 years running 24/7, my DS1815+ does not turn on anymore when I press the power button. The problem is not with the power supply; I did change it, but with no luck.

Fortunately, I was using Hyper Backup to copy everything daily into "Synology C2 Backup".

But it's not possible to retrieve a complete folder from the backup via the Synology C2 web page. One can only download file by file :(

To download a complete folder at once, you need the "Synology Hyper Backup Explorer" for desktop, available in the Desktop Utilities section of the downloads.