Synology Sync Plex Movies from Synology onto Unraid

I manage my movies with Plex. It is installed both on my Synology NAS, which is running 24/7, and on my Unraid server, which I turn on only for backup purposes.

I usually add new movies on my Synology first and copy them onto my Unraid server later. To do so, I use rsync.


ATTENTION: this is only to sync the files, not the metadata.


In each Plex, I have two libraries: Movies and Series TV

On Synology, each library includes two folders:

  • Movies includes /volume1/plex/Movies and /volume1/plex/new Movies
  • Series TV includes /volume1/plex/Series and /volume1/plex/new Series 

On Unraid, each library includes only one folder:

  • Movies includes /mnt/user/Movies
  • Series TV includes /mnt/user/Series

On Unraid, I have two shares, Movies and Series, to access respectively /mnt/user/Movies and /mnt/user/Series.

On the Synology NAS, I have mounted the shares of the Unraid Server as CIFS Shared Folder:

  • /<Unraid Server>/Movies on /volume1/mount/Movies
  • /<Unraid Server>/Series on /volume1/mount/Series

Each time I have a new movie or series, I copy it onto my Synology, respectively into /volume1/plex/new Movies or /volume1/plex/new Series.

All movies and series are first renamed using FileBot. This guarantees that they are all uniquely identified by title, year, resolution, season, episode, encoding, etc. According to Plex best practices, each movie is in its own folder.
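As a purely hypothetical illustration (the title and tags below are made up), this is the kind of layout FileBot produces: one folder per movie, with every identifying attribute in the file name. The snippet just builds a dummy tree to show the convention:

```shell
# Hypothetical illustration of the FileBot/Plex naming convention:
# one folder per movie, named with title, year, resolution and encoding.
lib=$(mktemp -d)    # stand-in for /volume1/plex/new Movies
mkdir -p "$lib/Big Buck Bunny (2008)"
touch "$lib/Big Buck Bunny (2008)/Big Buck Bunny (2008) [1080p, x264].mkv"
find "$lib" -mindepth 1 | sort    # show the resulting tree
```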

Once I have a few new media, I turn on my Unraid server and launch the following script in an SSH console (using PuTTY) on the Synology:


if grep -qs '/volume1/mount/Series ' /proc/mounts; then
  rsync --ignore-existing -h -v -r -P -t /volume1/plex/New\ Series/ /volume1/mount/Series/
else
  echo "Cannot sync new Series as Zeus is not mounted on /mount/Series"
fi

if grep -qs '/volume1/mount/Movies ' /proc/mounts; then
  rsync --ignore-existing -h -v -r -P -t /volume1/plex/New\ Movies/ /volume1/mount/Movies/
else
  echo "Cannot sync new Movies as Zeus is not mounted on /mount/Movies"
fi
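Before a first real run, rsync's -n (dry-run) flag is handy to preview what would be copied. Here is a self-contained sketch of mine on throwaway directories, using the same flags as above:

```shell
# Dry run: with -n, rsync only lists what it would transfer.
src=$(mktemp -d); dst=$(mktemp -d)
touch "$src/movie.mkv"
rsync --ignore-existing -h -v -r -P -t -n "$src/" "$dst/"
ls -A "$dst"    # nothing was actually copied
```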

Next, on Synology, I move all movies and series respectively from /volume1/plex/new Movies and /volume1/plex/new Series into /volume1/plex/Movies or /volume1/plex/Series (*)

Then, to be sure I don't have the same movie twice on the Unraid server (with a distinct encoding or resolution), I run this command in an SSH console on Unraid:

find /mnt/user/Movies -type f -iname '*.mkv' -printf '%h\n' | sort | uniq -c | awk '$1 > 1'

It does not work for the Series, as each (season) folder obviously contains several episodes...
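The same trick can however be adapted for series (a sketch of mine, not from the original setup): match on the SxxEyy tag in the FileBot names instead of on the parent folder. The show name and tags below are made up for the demo:

```shell
# Duplicate check for series: count occurrences of each SxxEyy tag.
# The demo runs on a throwaway tree; point DIR at /mnt/user/Series for real use.
DIR=$(mktemp -d)
mkdir -p "$DIR/Show/Season 01"
touch "$DIR/Show/Season 01/Show - S01E01 [720p].mkv" \
      "$DIR/Show/Season 01/Show - S01E01 [1080p].mkv" \
      "$DIR/Show/Season 01/Show - S01E02 [1080p].mkv"
dups=$(find "$DIR" -type f -iname '*.mkv' -printf '%f\n' \
  | grep -oE 'S[0-9]{2}E[0-9]{2}' | sort | uniq -c | awk '$1 > 1')
echo "$dups"    # lists S01E01, which exists twice
```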


This is only syncing the files! There is no easy way to also sync the metadata between the two Plex instances.

But voilà....


(*) Doing so, the fine-tuning done in Plex while the movie was under /new Movies is not lost. Temporarily, the movie will appear as "deleted" in Plex. Above all, do not "Empty Trash"! Soon (depending on how many movies you moved), it will be "available" again. I did test that trick explicitly:

1. Take a new movie:

2. Open it:

3. Check that path (here under, it is under /New Movies):

4. Edit some info for testing purpose (here under, the "Original Title"):

5. Change also the poster:

6. Using Windows Explorer or the File Station, move the folder of the movie to its new location. The movie will very soon appear as unavailable:

7. Open it:

8. Wait... Soon it will become available again:

9. Check the path now (here under, it is now under /Movies):

10. As you can see, the chosen cover is still there. And editing the details, you would see that the original title is still "DEMO MOVE FOLDER within Plex".

SmartHome Easily Backup openHab into a Shared Folder

I am running openHab2 on my Raspberry Pi 4 using openHABian, and I wanted to schedule a backup of my config into a shared folder of my Synology.

Backup as root

openHABian's backup solution 'Amanda' didn't convince me for automating such a backup. So I wrote my own backup script and scheduled it with a cron job.

First, create a shared folder named "backups" on your Synology via Control Panel > Shared Folder > Create:

Next, create a user account named "backup" via Control Panel > User > Create:

Grant that account write access on the shared folder "backups" via Control Panel > User > backup > Edit > Read/Write

The Synology part being ready, move now to openHABian on the RPI, using an SSH console (e.g. PuTTY), to create and schedule the backup script. Most parts will have to be done as 'root' (to make it simpler... but no safer), so type:

sudo -i

If, just like me, you previously tried to configure the backup with Amanda (using the command "sudo openhabian-config" and then the menu 50 > 52), a mailer daemon (exim4) was probably installed and you may now want to remove it... Check if it's running (not with the command "systemctl --type=service --state=running" but) with:

sudo service --status-all

If you see a + in front of exim4, disable and remove it:

sudo systemctl stop exim4
sudo systemctl disable exim4
sudo apt-get remove exim4 exim4-base exim4-config exim4-daemon-light
sudo rm -r /var/log/exim4/

Next, you can also uninstall and remove Amanda:

sudo apt remove amanda-client
sudo apt remove amanda-server
sudo rm -r /var/lib/amanda/
sudo rm -r /etc/amanda

Now, we can start preparing the backup script. First, create a mount point on your RPI, e.g. "/mnt/backups":

mkdir /mnt/backups

Next, declare the shared folder of your Synology by editing the fstab file:

sudo nano /etc/fstab 

Add the line here under in that file and save your change with CTRL-o, Enter, CTRL-x:

//<ip of your Synology>/backups /mnt/backups cifs username=backup,password=<password>,uid=1000,gid=1000,vers=3.0 0 0

Attention: the network is usually not yet available when the fstab file is used to mount the drives at boot time. So this shared folder will most probably not be mounted automatically!
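A common workaround (standard systemd mount options; I did not use them in this setup) is to let systemd mount the share lazily, on first access, instead of at boot:

```
//<ip of your Synology>/backups /mnt/backups cifs username=backup,password=<password>,uid=1000,gid=1000,vers=3.0,noauto,x-systemd.automount,_netdev 0 0
```

With noauto,x-systemd.automount, nothing is mounted until something touches /mnt/backups, at which point the network is normally up.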

Create a file:

nano /home/openhabian/

with the backup script here under:

#!/bin/bash
# Backup openHab to Synology

log=/var/log/maintenance.log

echo $(date) "Run openhab maintenance: $0" >> $log

if mountpoint -q /mnt/backups; then
    echo $(date) "Synology's backups share is mounted." >> $log
else
    echo $(date) "Synology's backups share is not mounted. Try to mount as per fstab definition." >> $log
    sudo mount /mnt/backups
    sleep 3
    if mountpoint -q /mnt/backups; then
        echo $(date) "Synology's backups share is now successfully mounted." >> $log
    else
        echo $(date) "Synology's backups share cannot be mounted." >> $log
    fi
fi

if mountpoint -q /mnt/backups; then
    # Keep the 10 last backups
    rm -f $(ls -1t /mnt/backups/Raspberry/openhab2-backup-* | tail -n +11)
    cd /usr/share/openhab2
    sudo ./runtime/bin/backup /mnt/backups/Raspberry/openhab2-backup-"$(date +"%Y_%m_%d_%I_%M").zip" >> $log
    echo $(date) "custom backups of openhab completed." >> $log
fi
echo "-----------------------------------------------------------------" >> $log
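To see what the "keep the 10 last backups" line does, here is the same ls/tail/rm idiom of mine exercised on a throwaway directory (the file names are dummies):

```shell
# Create 13 dummy backups, then delete all but the 10 most recent.
TMP=$(mktemp -d)
for i in $(seq -w 1 13); do touch "$TMP/openhab2-backup-$i.zip"; done
# ls -1t sorts newest first; tail -n +11 keeps everything from line 11 on.
ls -1t "$TMP"/openhab2-backup-* | tail -n +11 | xargs -r rm -f
ls "$TMP" | wc -l    # 10 backups remain
```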

Make that script executable (for all users...)

sudo chmod a+x

To run that script as root on a regular basis, you have to schedule it as root (using now sudo explicitly if you didn't type sudo -i earlier) via crontab:

sudo crontab -e

If it's the first time you run crontab, you will have to pick your preferred editor. I advise nano ;)

Select an editor. To change later, run 'select-editor'.
1. /bin/nano <---- easiest
2. /usr/bin/vim.basic
3. /usr/bin/mcedit
4. /usr/bin/vim.tiny
5. /bin/ed

Choose 1-5 [1]: 1

In crontab, add this at the end and save the change with CTRL-o, Enter, CTRL-x:

0 1 * * * /home/openhabian/

Notice: if you want to mount the shared drives at boot (which usually fails, as mentioned previously, because the network is not yet available when fstab is first processed), you can add this in the crontab too:

@reboot sleep 300; mount -a

You can now try the script with:

sh /home/openhabian/

If it works, it should also work when triggered by the cron job.

Backup as openhabian

Running scripts as root is usually not recommended. But the backup script of openHab may only be run as root... We could run it with the account 'openhab', but the backup files would then not belong to the user 'openhabian', making the cleanup tricky. If you really don't want to run and schedule my script as root, the best option is to run it with the account "openhabian":

Still being in root mode (sudo -i), create the log file manually and grant access to all users:

touch /var/log/maintenance.log
chmod a+rw /var/log/maintenance.log


Authorize the user "openhabian" to execute the backup script "/usr/share/openhab2/runtime/bin/backup". To do this, you have to create a file in the /etc/sudoers.d folder. All files in that folder are used by the "sudo" command to authorize configured users to execute specified commands as root, with or without password. You MUST ABSOLUTELY edit that file with the command "visudo", which checks that your changes are valid. If you edit it with another editor and it contains an error, you won't be able to use the "sudo" command anymore (you would then have to plug the SD card, via a USB adapter, into another Raspberry to fix the issue or simply delete the invalid file; USB devices are automatically mounted under /media/usbxxx if you installed the package usbmount).

visudo /etc/sudoers.d/openhab

In that file, add the line below and save your change with CTRL-o, Enter, CTRL-x:

# Allow openhabian user to execute the backup script
openhabian ALL=(ALL) NOPASSWD: /bin/mount, /usr/share/openhab2/runtime/bin/backup


Unschedule the script from root's crontab (remove the line added with crontab -e)

crontab -e
0 1 * * * /home/openhabian/


And schedule it now within openhabian's crontab (this has to be done as the 'openhabian' user):

sudo -u openhabian crontab -e

And add

0 1 * * * /home/openhabian/


Et voilà.


PS: If you experience issues when mounting the remote shared folder, try to mount it interactively (using an administration account of your Synology, or an account whose password has no symbols such as %, # or !):

apt install smbclient 
smbclient //<remote ip>/<shared folder> -U <user account>

You can also check the latest messages from the kernel

dmesg | tail -n10

Tips Easy replacement or upgrade of HTC Vive Headset's lenses

I scratched the lenses of my HTC Vive Pro by keeping my glasses on while playing. I couldn't find original replacement lenses, but I realized that I could simply use the lenses of my Samsung GearVR instead. And the result is amazing!


As you know if you googled for information about the lenses of the HTC Vive, they are "Fresnel lenses", i.e. a set of flat surfaces with different angles. Many people suffer from "god ray" artifacts with those lenses. I personally found the visual experience indeed quite poor for such a high-end headset.

The good news is that it's very easy to replace those lenses. And it can be done:

  1. Not only with cheap 3D-printed adaptors to reuse the lenses of an old Samsung GearVR.
  2. But also with prescription lenses adapted to your eyes if required.

First, a lot of information can be found on Google with the keywords : HTC Vive GearVR lens Mod.

If you need prescription lenses, simply google for HTC Vive prescription lenses. There are plenty of serious sites, and they provide both the lenses and the adaptors:

If you want to reuse the lenses of an old Samsung GearVR:

  • You need to print your own adaptors. Plans can be found on Thingiverse. Ex.: here or here.
  • Or you can buy them on the internet. There are plenty on eBay (with or without lenses). Search for HTC Vive Mod Upgrade Kit.
  • The adapters are the same for the HTC Vive, HTC Vive Pro and HTC Cosmos.
  • The Samsung GearVR lenses MUST NOT be those from the 2015 version (the white model); those are smaller. You need a dark blue version from 2016 or 2017!

There are many How-to on the web, such as this video which is quite complete for HTC Vive or this one for HTC Cosmos.

Note also that, contrary to what is illustrated in some videos, I had no need to unscrew anything to pop out the lenses, neither on the GearVR nor on the HTC Vive.

The improvement is really dramatic!

Et voilà.

Wordpress Plugin "Move WordPress Comment" not working anymore

I just noticed that this really great plugin "Move WordPress Comment" was failing when trying to move comments. Fortunately, the fix was easy.


This plugin is very useful when you have started a discussion thread (i.e. you replied to a comment), but the person does not answer on the last reply. Instead, he starts a new comment. In such a case, the plugin can be used to move his last comment under the last reply of the discussion thread.

For example, here under, I could attach the second discussion thread under the last reply of the first discussion thread by typing the id #46705 of that last reply into the "parent comment" field of the first comment #46719 of the second discussion thread and clicking "Move".

Unfortunately, this plugin now returns the error "Uncaught ArgumentCountError: Too few arguments to function wpdb::prepare()".

The fix is really simple. Go to your WordPress Dashboard, under the menu "Plugins" and select the "Plugin Editor".

Next, in the top-right corner, set "select plugin to edit" = "Move WordPress Comment" and click "Select".

Then, go to line 63 or search for "prepare". This method requires two parameters. So, in the WHERE clauses of the SQL UPDATE statements, replace the variables with %s and pass the variables as a second parameter.

It should result into this:

// move to different post
if ( $postID != $oldpostID ) {
	$wpdb->query( $wpdb->prepare( "UPDATE $wpdb->comments SET comment_post_ID=$postID WHERE comment_ID=%s;", "$commentID" ) );

	// Change post count
	$wpdb->query( $wpdb->prepare( "UPDATE $wpdb->posts SET comment_count=comment_count+1 WHERE ID=%s", "$postID" ) );
	$wpdb->query( $wpdb->prepare( "UPDATE $wpdb->posts SET comment_count=comment_count-1 WHERE ID=%s", "$oldpostID" ) );
}

// move to different parent
if ( $parentID != $commentID ) {
	$wpdb->query( $wpdb->prepare( "UPDATE $wpdb->comments SET comment_parent=$parentID WHERE comment_ID=%s;", "$commentID" ) );
}

Finally, click on "Update File", at the bottom of the Plugin Editor.

Et voilà,

Hardwares Flash LSI 9211-8i and LSI 9201-16i on Asus P9X79 WS

I moved my LSI cards 9211-8i and 9201-16i from my Asus Striker II Formula to a P9X79 WS. Unfortunately, the motherboard couldn't boot with both cards plugged in. I upgraded the firmware and erased the BIOS of both cards to solve the problem.


Actually, everything was working fine until I decided to upgrade the BIOS of my Asus P9X79 WS from the version 48022 of 2015 to the very latest 4901 of 2018 (an attempt to solve an issue with a DIMM not being recognized... the attempt failed... I think the issue is actually the CPU not supporting Quad Channel). In short:

After this upgrade, the Asus MB didn't boot anymore. There was only a cursor blinking in the upper-left corner: clearly, IMO, the cursor of the LSI BIOS.

  • Booting without any LSI card was ok.
  • Booting with only the LSI 9211-8i was ok.
  • Booting with only the LSI 9201-16i was ok.
  • But booting with both LSI cards was failing, whichever of the various PCI-E x16 slots I moved them to.

So, I decided to attempt the upgrade of both LSI cards (currently with firmware IT version and Bios). But while preparing the USB key to do this and reading a few articles, I realized that I could also completely erase the BIOS, as I don't boot from a disk on any of those cards.

So, here is what I did:

  1. Format a USB key as FAT32
  2. Download the legacy firmwares (and bios if you want) for the LSI cards from here (with Product Group = Legacy Product and Product Family = Legacy Host Bus Adapters)
    1. for 9201-16i, it's here. You need two files
      2. Installer_PXX_for_UEFI
    2. for 9211-8i, it's here. You need two files:
      1. 9211-8i_Package_Pxx_IR_IT_FW_BIOS_for_MSDOS_Windows
      2. Installer_PXX_for_UEFI (which is actually the same file as for 9201-16i)
  3. From the Installer_PXX_for_UEFI, extract the file /sas2flash_efi_ebc_rel/sas2flash.efi and copy it on the USB Key root
  4. From the xxx_BIOS_for_MSDOS_Windows, extract
    1. the firmwares /Firmware/HBA_92xx_IT/xxx.bin and copy them in the USB Key root:
      1. 9201-16i_it.bin
      2. 2118it.bin
    2. the bios sasbios_rel/mptsas2.rom (it's the same bios for both cards) and copy it in the USB Key root
  5. Now, you need a UEFI shell which works with the P9X79. This one for example or the v2 to be found here didn't work for me. Instead, I was lucky with this one:
    1. download this Shell_Full.efi on your USB Key root
    2. and rename it Shellx64.efi
    3. for some users, they have to copy if from the root into sublfolders like:
      1. /efi/boot/Shellx64.efi
      2. /boot/efi/Shellx64.efi
  6. Plug the USB Key into your Asus P9X79 WS. For safety, remove all other USB keys and disconnect all HDD/SSD.
  7. Next, plug in only one LSI card and upgrade it. Once done, remove it, plug in the other card and upgrade it too.

To upgrade one LSI card with your Asus P9X79 WS:

  1. Boot the PC and go into the Bios with Del or F2.
  2. Go into the "Advanced mode" (F7 or click on the button bottom left)
  3. Click on the "Boot" menu (upper right)  and scroll down until you see "Secure Boot" (screenshot 1)
  4. Enter into "Secure Boot" and select "OS Type" = "Other OS" (screenshot 2). This is required to be able to use the UEFI Shell later
  5. Press F10 to "Save and Exit" (screenshot 3)
  6. Go back into the Bios with Del or F2.
  7. Go into the "Advanced mode" (F7 or click on the button bottom left)
  8. Click on the "Exit" menu (upper right)
  9. Select "Launch EFI Shell from filesystem device" (screenshot 4)
    1. if you have only the USB key prepared previously plugged into the PC, you should soon see the EFI Shell loading
  10. Once the EFI Shell is loaded, move to the USB key and list its content with the commands fs0: and next ls. If you have several USB keys, you need to move onto the right one. You can use the command map -b to list all available disks and identify the correct one. Ex.: if the correct one is "fs1 :Removable HardDisk - … USB(…)", then you can move onto it with fs1:. You can stop the map command by typing q.
  11. Now check the versions of your LSI card with sas2flash -list
  12. Remove the BIOS with the erase command and the parameter 5, as documented in SAS2Flash_ReferenceGuide.pdf (screenshot 5): sas2flash -o -e 5
  13. Possibly also erase the controller flash memory (I didn't have to do it) with the command: sas2flash.efi -o -e 6
  14. Type sas2flash -list again to verify that the Bios Version now reads N/A
  15. To load a new firmware, depending on the LSI card, type :
    1. sas2flash.efi -o -f 9201-16i_it.bin
    2. sas2flash.efi -o -f 2118it.bin
  16. To load the bios, type: sas2flash.efi -o -b mptsas2.rom

Do this for each LSI card, one at a time.




Screenshot 1: Secure Boot

Screenshot 2: Other OS

Screenshot 3: Save and Exit

Screenshot 4: Launch EFI Shell

Screenshot 5: Erase command for sas2flash

Raspberry Pi How to remote access MySQL on openHabian (RPI 4)

I wanted to use phpMyAdmin on a Synology to access a MySQL server running on an RPI with openHABian. Here is my how-to:


First connect on your openHabian using a ssh console.

Obviously, you need MySQL to be installed and configured:

sudo apt update

sudo apt upgrade

sudo apt install mariadb-server

sudo mysql_secure_installation

Then, double-check that MySQL is running and listening on port 3306:
netstat -plantu | grep 3306

If nothing is displayed by this command, MySQL is not listening on the port 3306.
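On recent Raspbian images, netstat may not even be installed; ss from iproute2 does the same check (my own alternative, not part of the original how-to):

```shell
# Fall back to a clear message when nothing listens on 3306,
# so the check never fails silently.
listen=$(ss -tln | grep 3306 || echo "MySQL is not listening on port 3306")
echo "$listen"
```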

Enter MySQL as root with the command:

sudo mysql -uroot -p

Check the port used by MySQL:

SHOW GLOBAL VARIABLES LIKE 'port';
Then, type the following MySQL commands to create an account and a database, and grant both local and remote access for this account on the database:

CREATE USER '<YourAccount>'@'localhost' IDENTIFIED BY '<YourPassword>';


GRANT ALL PRIVILEGES ON <YourDatabase>.* TO '<YourAccount>'@'localhost';

GRANT ALL PRIVILEGES ON *.* to '<YourAccount>'@'169.254.0.%' identified by '<YourAccountPassword>' WITH GRANT OPTION;


Here above, I grant access from all machines in my local network with '169.254.0.%'. One can restrict access to a single machine by using its specific address, such as: ''.

Now, edit 50-server.cnf and configure MySQL to no longer listen only on its local IP (simply comment out the bind-address line):

sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf

# Instead of skip-networking the default is now to listen only on
# localhost which is more compatible and is not less secure.
#bind-address =

Finally, restart MySQL for the changes above to be applied:

sudo service mysql restart

You can now edit the config of phpMyAdmin to access the MySQL on your RPI. If it is running on Synology, look here.

Synology How to Add Multiple Hosts in phpMyAdmin on Synology

On Google, one can easily find how to add servers in the list presented on the login page of phpMyAdmin. But these results don't apply if you are using the 'phpMyAdmin' package for Synology. With that package, one must edit the synology_server_choice.json file.


If you are connected on your NAS via a SSH console, the file to be edited is located in /var/services/web/phpMyAdmin/synology_server_choice.json.

But you should also be able to access it from a Windows PC on \\<YourNAS>\web\phpMyAdmin\synology_server_choice.json

To add a server, simply duplicate the first statement of the json file, separated with a comma:

{"verbose":"Server 1","auth_type":"cookie","host":"localhost","connect_type":"socket","socket":"\/run\/mysqld\/mysqld10.sock","compress":false,"AllowNoPassword":false},
{"verbose":"Server 2","auth_type":"cookie","host":"","connect_type":"socket","socket":"\/run\/mysqld\/mysqld10.sock","compress":false,"AllowNoPassword":false},
{"verbose":"Server 3","auth_type":"cookie","host":"","connect_type":"socket","socket":"\/run\/mysqld\/mysqld10.sock","compress":false,"AllowNoPassword":false}

Et voilà.

Raspberry Pi Install Java 8 SDK and OpenHab 2 on Raspberry Pi Desktop for RPI 4

I wanted to install OpenHab 2 on my RPI 4, which is running the latest Raspberry Pi Desktop. But I was missing Java 8, which is a prerequisite and unfortunately no longer available as a stable version for Debian 10, due to a security issue.


First, here is the version of Raspberry Pi Desktop I have:

$ cat /etc/os-release
PRETTY_NAME="Raspbian GNU/Linux 10 (buster)"
NAME="Raspbian GNU/Linux"
VERSION="10 (buster)"

Trying to install Java 8 SDK was resulting in errors like:

$ sudo apt-get install openjdk-8-jdk
Reading package lists... Done
Building dependency tree
Reading state information... Done
E: Unable to locate package

Or like:

Reading package lists... Done
Building dependency tree
Reading state information... Done
Package openjdk-8-jdk is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
However the following packages replace it:
E: Package 'openjdk-8-jdk' has no installation candidate

My Package sources were:

deb buster main contrib non-free
deb buster/updates main contrib non-free
deb buster-updates main contrib non-free

The solution was to add a new source with the 'unstable' arm-hf packages in /etc/apt/sources.list.d/raspi.list ('sid' is the codename for unstable):

$ echo 'deb sid main' | sudo tee -a /etc/apt/sources.list.d/
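Mixing sid into buster can pull in far more than Java on the next upgrade. A standard precaution (my own addition; the file name is arbitrary) is to pin unstable to a low priority, so only packages that exist nowhere else come from it:

```
# /etc/apt/preferences.d/limit-sid  (hypothetical file name)
Package: *
Pin: release a=unstable
Pin-Priority: 100
```

With priority 100, apt still installs openjdk-8-jdk from sid (no other source has it) but won't upgrade existing buster packages to their sid versions.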

Next, do:

$ sudo apt-get update
$ sudo apt install gcc-8-base
$ sudo apt-get install openjdk-8-jdk

NB.: without installing gcc-8-base, you would get an error like this :

Reading package lists... Done
Building dependency tree
Reading state information... Done
Some packages could not be installed. This may mean that you have
requested an impossible situation or if you are using the unstable
distribution that some required packages have not yet been created
or been moved out of Incoming.
The following information may help to resolve the situation:

The following packages have unmet dependencies:
libc6-dev : Breaks: libgcc-8-dev (< 8.4.0-2~) but 8.3.0-6+rpi1 is to be installed
E: Error, pkgProblemResolver::Resolve generated breaks, this may be caused by held packages.


Now, you can install OpenHab 2:

$ wget -qO - '' | sudo apt-key add -
$ sudo apt-get install apt-transport-https
$ echo 'deb stable main' | sudo tee /etc/apt/sources.list.d/openhab2.list
$ sudo apt-get update

If you get an error like this one:

E: The repository ' unstable Release' is not signed.
N: Updating from such a repository can't be done securely, and is therefore disabled by default.
N: See apt-secure(8) manpage for repository creation and user configuration details.

Then do:

$ echo 'deb [trusted=yes] stable main' | sudo tee /etc/apt/sources.list.d/openhab2.list

Finally, do:

$ sudo apt-get install openhab2
$ sudo apt-get install openhab2-addons
$ sudo systemctl daemon-reload
$ sudo systemctl enable openhab2.service
$ sudo adduser openhab dialout
$ sudo adduser openhab tty

Edit /etc/default/openhab2 to give Java access to the serial ports (e.g. for Z-Wave keys):

$ nano /etc/default/openhab2 


Check the service status with "sudo systemctl status openhab2.service". It should output:

openhab2.service - openHAB 2 - empowering the smart home
Loaded: loaded (/usr/lib/systemd/system/openhab2.service; disabled; vendor preset: enabled)
Active: active (running) since Wed 2020-07-15 21:57:07 BST; 28min ago
Main PID: 26101 (java)
Tasks: 101 (limit: 4915)
Memory: 212.3M
CGroup: /system.slice/openhab2.service
└─26101 /usr/bin/java -Dopenhab.home=/usr/share/openhab2 -Dopenhab.conf=/etc/openhab2 -Dopenhab.runtime=/usr/share/openhab2/runtime -Dopenhab.userdata=/var/lib/openhab2 -Dopenhab.logdir=/var/log/openhab2

Jul 15 21:57:07 Helios systemd[1]: Started openHAB 2 - empowering the smart home.


As far as I am concerned, I also share the various openHAB folders via smb.

Edit /etc/samba/smb.conf

$ sudo nano /etc/samba/smb.conf

[openHAB2-userdata]
comment=openHAB2 userdata
only guest=no
create mask=0777
directory mask=0777

[openHAB2-conf]
comment=openHAB2 site configuration
only guest=no
create mask=0777
directory mask=0777

[openHAB2-logs]
comment=openHAB2 logs
only guest=no
create mask=0777
directory mask=0777

[openHAB2-backups]
comment=openHAB2 backups
only guest=no
create mask=0777
directory mask=0777

Restart the Samba service:

$ sudo systemctl restart smbd.service


Start openHab with:

$ sudo systemctl start openhab2.service
$ sudo systemctl status openhab2.service

It can take up to 15 minutes to initialize, but soon you should be able to access openHab on your RPI on port 8080!


Do a backup with:

$ sudo /usr/share/openhab2/runtime/bin/backup

Restore a backup with:

$ sudo systemctl stop openhab2.service
$ sudo /usr/share/openhab2/runtime/bin/restore /var/lib/openhab2/backups/openhab2-backup-....
$ sudo systemctl start openhab2.service

It will take several minutes to restart!


More details about installing openHab on Linux can be found on the official page.

Et voilà!

Raspberry Pi Use a Z-Wave Controller USB Key with openHAB in Docker on a RPI 4

It took me quite some hours to get my Aeotec Z-Stick Gen5 (ZW090) key working within my Docker image of openHAB, running on a Raspberry Pi 4 (with Raspberry Pi OS). Here are all the tips I used.


First, before plugging the controller into your RPI, configure the RPI to enable the serial port. Connect to your RPI in an SSH console (e.g. via PuTTY) and type the command:

sudo raspi-config

Use "5 interfacing Options" > "P6 Serial" > "Yes" > "Ok"

And now reboot.

Next, back in an SSH console, check which USB devices already exist with the command:

lsusb

Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

And flush the kernel and boot logs:

sudo dmesg -c >> ~/dmesg-`date +%d%m%Y`.log

Then, plug your Z-Wave controller USB key into a USB port and check that it's detected and mounted properly:

lsusb

Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 009: ID 0658:0200 Sigma Designs, Inc. Aeotec Z-Stick Gen5 (ZW090) - UZB
Bus 001 Device 002: ID 2109:3431 VIA Labs, Inc. Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

dmesg

[ 3124.779069] usb 1-1.4: new full-speed USB device number 9 using xhci_hcd
[ 3124.919928] usb 1-1.4: New USB device found, idVendor=0658, idProduct=0200, bcdDevice= 0.00
[ 3124.919942] usb 1-1.4: New USB device strings: Mfr=0, Product=0, SerialNumber=1
[ 3124.919953] usb 1-1.4: SerialNumber: 32303136-3131-3033-3030-303031383932
[ 3124.926704] cdc_acm 1-1.4:1.0: ttyACM0: USB ACM device


"lsusb" should show you a new device. Ex.: Aeotec Z-Stick Gen5 (ZW090) - UZB.

And "dmesg" should show you the mount point: cdc_acm 1-1.4:1.0: ttyACM0: USB ACM device.

If you don't see the mount point, you possibly have a device not supported by the RPI 4. It seems to be the case with the old Aeotec "Z-Stick Gen5"; the "New Z-Stick Gen5" and "Z-Stick Gen5+" should however be compatible. But there is a trick: plug your key into the RPI 4 via a USB hub (2.0 or 3.0).
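Optionally, since the device can re-enumerate as ttyACM1 after a reboot, a standard udev rule can pin a stable alias (my own addition; the file name is arbitrary, and the vendor/product IDs 0658:0200 are those reported by lsusb for the Z-Stick):

```
# /etc/udev/rules.d/99-zwave.rules  (hypothetical file name)
SUBSYSTEM=="tty", ATTRS{idVendor}=="0658", ATTRS{idProduct}=="0200", SYMLINK+="zwave"
```

The key would then also be reachable as /dev/zwave, which you could pass to docker run with --device=/dev/zwave.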

I presume that Docker itself is already up and running on the RPI. If not, install it. Then prepare the openhab user and folders:

sudo useradd -r -s /sbin/nologin openhab
sudo usermod -a -G openhab pi
sudo mkdir /opt/openhab
sudo mkdir /opt/openhab/conf
sudo mkdir /opt/openhab/userdata
sudo mkdir /opt/openhab/addons
sudo chown -R openhab:openhab /opt/openhab

Check the id of the user openhab with:

id openhab

uid=999(openhab) gid=994(openhab) groups=994(openhab)

Grant access on the Serial Port for the user 'openhab':

sudo chmod 777 /dev/ttyACM0
sudo chown openhab /dev/ttyACM0

And use the uid and gid found above in the following command, setting the ttyA* found previously and specifying the version to be used:

docker run --name openhab --net=host --device=/dev/ttyACM0 -v /etc/localtime:/etc/localtime:ro -v /etc/timezone:/etc/timezone:ro -v /opt/openhab/conf:/openhab/conf -v /opt/openhab/userdata:/openhab/userdata -v /opt/openhab/addons:/openhab/addons -d -e USER_ID=<uid> -e GROUP_ID=<gid> --restart=always openhab/openhab:latest


Now, using Portainer (because it's easy), open a console within the openhab container... Portainer is not yet installed? Do it with:

docker run -d -p 9000:9000 -p 8000:8000 --name portainer1 --restart always -v /var/run/docker.sock:/var/run/docker.sock -v portainer:/data portainer/portainer:latest

Go to the page http://<Your RPI IP>:9000, open the Containers and click on the "Exec Console" icon of 'openhab' container:

Grant the same access rights inside the image as on the RPI:

sudo chmod 777 /dev/ttyACM0
sudo chown openhab /dev/ttyACM0

chown -R openhab:openhab /opt/openhab

Now restart the 'openhab' container (with the Restart icon ;) ). It will take some minutes to become available. Once you can get into it, go to the Things and configure the controller to use the serial port ttyACM0:


Et voilà

Raspberry Pi Raspberry Pi's SD card full ?

Trying to update one of my Raspberry Pi Desktops, I got messages claiming that there was not enough free space.


I saw that my Pi was full when I tried to update it with

sudo apt-get update && sudo apt-get upgrade -y

 Error writing to output file - write (28: No space left on device)

I could also see that there was no free storage anymore as the system was unable to allocate the swap file:

systemctl status dphys-swapfile.service

dphys-swapfile.service - dphys-swapfile - set up, mount/unmount, and delete a swap file
Loaded: loaded (/lib/systemd/system/dphys-swapfile.service; enabled; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2020-06-28 12:35:15 BST; 4min 21s ago
[...]
Jun 28 12:35:15 helios dphys-swapfile[327]: want /var/swap=100MByte, restricting to 50% of remaining disk size: 0MBytes
Jun 28 12:35:15 helios systemd[1]: Failed to start dphys-swapfile - set up, mount/unmount, and delete a swap file.

And indeed, the swap file was 0B:

free -h

              total        used        free      shared  buff/cache   available
Mem:          3.8Gi       285Mi       2.7Gi        11Mi       886Mi       3.4Gi
Swap:            0B          0B          0B

Start to investigate with:

sudo df -h

Filesystem      Size  Used Avail Use% Mounted on
/dev/root        29G   29G     0 100% /
devtmpfs        1.8G     0  1.8G   0% /dev
tmpfs           2.0G     0  2.0G   0% /dev/shm

Or use the inode version, which won't include the mounted drives:

sudo df -i

Filesystem   Inodes   IUsed    IFree IUse% Mounted on
/dev/root   1895552  328167  1567385   18% /
devtmpfs     117763     409   117354    1% /dev
tmpfs        183811       1   183810    1% /dev/shm

Check, in the output of those commands, that the size of the root partition (/dev/root) is close to the size of your SD card (here above, Size = 29G on my 32GB SD). If it is not the case, enlarge it with:

sudo raspi-config

7 Advanced Options > A1 Expand Filesystem 


If the size of the root partition is maximum, then investigate to find the very large stuff with:

sudo du -xh / | grep -P "G\t"

1,2G /opt/openhab/userdata
1,2G /opt/openhab
1,2G /opt
1,3G /usr
11G /var/lib/docker
11G /var/lib
11G /var
16G /mnt/backup
16G /mnt
29G /

To only have the size of the first level folders, use:

sudo du -xh --max-depth=1 / | grep -P "G\t"

1.2G /opt
1.3G /usr
11G /var
16G /mnt
29G /

Check, in the output of that command, if there is any folder or file which could be cleaned up.
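The same first-level scan can also be sorted so the biggest entries come last (a variant of mine). Demonstrated here on a throwaway tree; point SCAN at / on the Pi itself:

```shell
# Build a small dummy tree with one clearly bigger subfolder.
SCAN=$(mktemp -d)
mkdir -p "$SCAN/var" "$SCAN/opt"
head -c 2097152 /dev/zero > "$SCAN/var/big.log"    # 2 MiB dummy file
head -c 4096 /dev/zero > "$SCAN/opt/small.bin"     # 4 KiB dummy file
# Sort numerically; the last line is the total, the one before it
# is the largest first-level folder.
biggest=$(du -x --max-depth=1 -k "$SCAN" | sort -n | tail -n 2 | head -n 1)
echo "$biggest"
```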

You can also use this to navigate through your SD:

sudo mount --bind /  /mnt
sudo ncdu -x /mnt

-- /mnt ------------------------
   15,1 GiB [##########] /mnt
   10,6 GiB [######    ] /var
    1,3 GiB [          ] /usr
    1,2 GiB [          ] /opt
  415,3 MiB [          ] /home
  353,3 MiB [          ] /lib
    9,3 MiB [          ] /bin

You can navigate in this table (with keys up and down) and open folders (press enter) to see the details of their content. Exit this table by pressing "q".

If, by any accident, you don't succeed in deleting a large file or folder (especially if located under /media or /mnt), check that it's not on a mounted drive. Auto-mounts are usually defined in /etc/fstab:

cat /etc/fstab

You can unmount them all at once with:

sudo umount -a -t cifs -l


Regarding packages, you can clean up some space with:

sudo apt-get autoremove

sudo rm -R /var/cache/

sudo mkdir -p /var/cache/apt/archives/partial
sudo touch /var/cache/apt/archives/lock
sudo chmod 640 /var/cache/apt/archives/lock
sudo apt-get clean


If you are using Docker, you can check the space consumed with:

sudo du -sh /var/lib/docker/overlay2

docker system df

You can clean up some space with:

docker system prune -a -f

docker system prune --all --volumes --force

docker volume rm $(docker volume ls -qf dangling=true)


If using GitLab, you can cleanup some space with:

sudo gitlab-ctl registry-garbage-collect


Other useful commands:

sudo find / -type f -size +500M -exec ls -lh {} \;

sudo touch /forcefsck ; sudo reboot

sudo resize2fs /dev/mmcblk0p2 ; sudo reboot

After reboot check the resize status with:

systemctl status resize2fs_once.service

If you see the error "Failed to start LSB: Resize the root filesystem to fill partition", you can disable the resize2fs with:

sudo systemctl disable resize2fs_once


What the issue was in my own case: my remote backup folder was not mounted anymore on /mnt/backup, but the backup script ran anyway and stored a 16GB file into the local folder /mnt/backup. I was able to delete that local backup by commenting out the related shared folder in /etc/fstab, rebooting, and then deleting /mnt/backup.

Et voilà!