Author: vletroye

  • USB ports used by openHab’s Z-Wave Controller change after each reboot on RPI

Usually, USB keys get assigned a new port such as /dev/ttyACM0, /dev/ttyACM1, etc., each time they are unplugged and replugged into the RPI or when the RPI reboots. A solution is to make these ports permanent via symlinks.


This “feature” is a problem because the configured device may no longer be recognized by the software relying on it, like the Z-Wave Binding or openHab.

    See a definitive solution here: make serial USB ports persistent via symlinks.
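As an illustration, here is a minimal sketch of that idea using a udev rule (the vendor/product IDs below belong to a typical Z-Wave stick and are only an example; check your own device with udevadm first):

# Find the attributes of the controller currently exposed as /dev/ttyACM0
udevadm info -a -n /dev/ttyACM0 | grep -E 'idVendor|idProduct'

# Create a rule that always exposes that device as /dev/ttyUSB-zwave
sudo tee /etc/udev/rules.d/99-usb-serial.rules > /dev/null <<'EOF'
SUBSYSTEM=="tty", ATTRS{idVendor}=="0658", ATTRS{idProduct}=="0200", SYMLINK+="ttyUSB-zwave"
EOF

# Reload the rules, then unplug/replug the stick (or reboot)
sudo udevadm control --reload-rules

The Z-Wave binding can then be pointed to /dev/ttyUSB-zwave instead of /dev/ttyACM0.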


  • Sudoer file not working on Synology due to dots in its name

I spent an hour investigating why I couldn’t execute a command with sudo from a php script, although the user was authorized for that command in a sudoer file… The problem was a dot in the name of the sudoer file.


My php script is part of a package I created to run on my Synology (DSM 7.x). It runs under an account named after the package: MODS_Package7.x

    That php script executes the following code:

$COMMAND = "sudo /usr/syno/bin/synopkg start '$PACKAGE' 2>&1";
exec($COMMAND, $output, $result);

    My sudoer file was named /etc/sudoers.d/MODS_Package7.x and contained:

    MODS_Package7.x ALL=(ALL) NOPASSWD: /usr/syno/bin/synopkg

It didn’t work until I removed the “.”, renaming the sudoer file to /etc/sudoers.d/MODS_Package7_x

     

    How stupid,  but it’s indeed mentioned in the documentation:

    sudo will read each file in /etc/sudoers.d, skipping file names that end in ‘~’ or contain a ‘.’ character to avoid causing problems with package manager or editor temporary/backup files.

The /etc/sudoers.d/README file does not exist on Synology, but can be found on other Linux distributions:

    
    #
    # As of Debian version 1.7.2p1-1, the default /etc/sudoers file created on
    # installation of the package now includes the directive:
    # 
    #   #includedir /etc/sudoers.d
    # 
    # This will cause sudo to read and parse any files in the /etc/sudoers.d 
    # directory that do not end in '~' or contain a '.' character.
    # 
    # Note that there must be at least one file in the sudoers.d directory (this
    # one will do), and all files in this directory should be mode 0440.
    # 
    # Note also, that because sudoers contents can vary widely, no attempt is 
    # made to add this directive to existing sudoers files on upgrade.  Feel free
    # to add the above directive to the end of your /etc/sudoers file to enable 
    # this functionality for existing installations if you wish!
    #
    # Finally, please note that using the visudo command is the recommended way
    # to update sudoers content, since it protects against many failure modes.
    # See the man page for visudo for more information.
    #

     


  • Run a command as root on Synology with any user

Synology is reinforcing the security within its DSM, making it more difficult to execute scripts as root from packages. Here is my trick to do so, based on ssh and php.


Theoretically, to run something as root, a user must be a sudoer, i.e. a user having the right to execute commands as root. It’s not difficult to configure a user to be a sudoer: you simply have to add that user into a file located under /etc/sudoers.d/, with a few parameters describing what he can execute and whether he has to type his password. Ex.: to allow a user named “beatificabytes” to execute the command ‘shutdown’ on a Synology:

1. Create a file (the name does not matter, as long as it contains no dot, see the post above): /etc/sudoers.d/beatificabytes
2. Edit that file with vi (or nano if you installed that package). Notice that you should really be careful, as any error in that file will prevent you from using sudo at all: DANGER !!! That’s why it is usually highly recommended to use the command ‘visudo’ to edit such files (it checks the syntax before saving changes)… Unfortunately, this command is not available on Synology.
    3. To grant rights without having to type a password, type this: beatificabytes ALL=(ALL) NOPASSWD: /sbin/shutdown
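For reference, here is a minimal sketch of those three steps on the command line, run as root (the user name and command come from the example above; since visudo is missing on Synology, you can only double check the result afterwards with sudo -l):

# Create the sudoer file with the authorization line
echo 'beatificabytes ALL=(ALL) NOPASSWD: /sbin/shutdown' > /etc/sudoers.d/beatificabytes

# sudoers files should be mode 0440 (see the README quoted above)
chmod 0440 /etc/sudoers.d/beatificabytes

# List which commands that user may now run as root
sudo -l -U beatificabytes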

    The problem I have with this approach is that I have many various users (each package runs with its own user) and I don’t want to define them all as sudoers.

    One option would be to run all packages with the same user who is defined as a sudoer. This is possible via a privilege file to be added into the packages (/conf/privilege).

But if, like me, you want to always use the same administrator account, which is already a sudoer, and if you have php installed on your NAS and ssh enabled, then you could simply open a ssh session with that admin account, from a php script, to execute your commands.

    Here is the php script I am using for such a purpose (saved into a file named ‘sudo.php’):

<?php
// Parse the command line options: user, password, server, port, command
$options = getopt("u:p:s:o:c:");
$user = $options['u'];
$password = $options['p'];
$server = $options['s'];
$port = $options['o'];
$command = $options['c'];

if (!function_exists("ssh2_connect")) die("php module ssh2.so not loaded");

if (!($con = ssh2_connect($server, $port))) {
    echo "fail: unable to establish connection to '$server:$port'\n";
} else {
    // try to authenticate with username and password
    if (!ssh2_auth_password($con, $user, $password)) {
        // mask the password (keep only the first and last characters) before displaying it
        if (strlen($password) > 2) {
            $pass = str_repeat("*", strlen($password) - 2);
        } else {
            $pass = "";
        }
        $pass = substr($password, 0, 1) . $pass . substr($password, -1);
        echo "fail: unable to authenticate with user '$user' and password '$pass'\n";
    } else {
        // all right, we're in!
        echo "okay: logged in...\n";

        // execute the command as root via sudo, piping the password to sudo -S
        $command = "echo '$password' | sudo -S $command 2>&1 ";
        if (!($stream = ssh2_exec($con, $command))) {
            echo "fail: unable to execute command '$command'\n";
        } else {
            // collect returning data from command
            stream_set_blocking($stream, true);
            $data = "";
            while ($buf = fread($stream, 4096)) {
                $data .= $buf;
                echo $buf;
            }
            fclose($stream);
        }
    }
}
?>
    

To call it from a shell, for example, assuming that you are using php7.3, type something like:

    php -dextension=/volume1/@appstore/PHP7.3/usr/local/lib/php73/modules/ssh2.so sudo.php -u YourAdminAccount -p YourAdminPassword -s 127.0.0.1 -o 22 -c "whoami"
    

In the example above:

• 22 is the port defined for ssh on my Synology
• the command executed is ‘whoami’, so the output will be “root”

    Et voilà.


  • Low dialog volume with DTS movies on RealTek compared to AC3

A simple trick to fix the low dialog volume of DTS audio tracks played through a Realtek codec is to enable Loudness Equalization.


    I am using Home Media Player Classic x64 v1.9.8 (24-10-2020) to play movies on my Windows 10.  My Motherboard is a Z270 Gaming Pro Carbon with a Realtek® ALC1220 Codec.

    When playing movies with a DTS audio track, the volume is always really low compared to movies with an AC3 audio track, especially regarding the dialogs.

    It was not the case when I initially installed Windows so I presume that this is a regression due to some driver updates… But a simple trick was to enable the Loudness Equalization via the Realtek HD Audio Manager.

    If you don’t know how to open the Realtek HD Audio Manager, open it directly from Windows Explorer or with Win+R: C:\Program Files\Realtek\Audio\HDA\RtkNGUI64.exe

    [EDIT 23/05/2021] This trick didn’t solve the issues with Netflix. The voices were really inaudible. I had to switch the sound of Netflix from 5.1 to 2.0. I am sure there is another solution. But I didn’t find it yet.

    Et voilà,


  • Sync Plex Movies from Synology onto Unraid

I am managing my movies with Plex. It is installed both on my Synology NAS, which is running 24/7, and on my Unraid Server, which I turn on only for backup purposes.

    I am usually adding new movies first on my Synology. I copy them later onto my Unraid Server. To do so, I am using rsync.


    ATTENTION: this is only to sync the files, not the metadata.

     

    In each Plex, I have two libraries: Movies and Series TV

    On Synology, each library includes two folders:

    • Movies includes /volume1/plex/Movies and /volume1/plex/new Movies
    • Series TV includes /volume1/plex/Series and /volume1/plex/new Series 

    On Unraid, each library includes only one folder:

    • Movies includes /mnt/user/Movies
    • Series TV includes /mnt/user/Series

On Unraid, I have two shares, Movies and Series, to access respectively /mnt/user/Movies and /mnt/user/Series.

    On the Synology NAS, I have mounted the shares of the Unraid Server as CIFS Shared Folder:

    • /<Unraid Server>/Movies on /volume1/mount/Movies
    • /<Unraid Server>/Series on /volume1/mount/Series
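For information, those CIFS mounts can also be created from a ssh console (as root) instead of DSM’s GUI; a minimal sketch, with the server name and credentials as placeholders:

mount -t cifs //<Unraid Server>/Movies /volume1/mount/Movies -o username=<user>,password=<password>,vers=3.0
mount -t cifs //<Unraid Server>/Series /volume1/mount/Series -o username=<user>,password=<password>,vers=3.0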

Each time I have a new movie or series, I copy it onto my Synology, respectively into /volume1/plex/new Movies or /volume1/plex/new Series.

All movies and series are first renamed using filebot. This guarantees that all are uniquely identified with the title, year, resolution, season, episode, encoding, etc. According to Plex best practices, each movie is in its own folder.
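For illustration, a filebot call along these lines can do that renaming from the command line (the folder, database and naming format here are only examples; adapt them to your own scheme):

filebot -rename "/volume1/plex/new Movies" --db TheMovieDB -non-strict --format "{n} ({y})/{n} ({y}) {vf}"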

    Once I have a few new media, I turn on my Unraid Server and launch the following script in a SSH console (using Putty) on the Synology:

#!/bin/bash

if grep -qs '/volume1/mount/Series ' /proc/mounts
then
    rsync --ignore-existing -h -v -r -P -t /volume1/plex/New\ Series/ /volume1/mount/Series/
else
    echo "Cannot sync new Series as Zeus is not mounted on /mount/Series"
fi

if grep -qs '/volume1/mount/Movies ' /proc/mounts
then
    rsync --ignore-existing -h -v -r -P -t /volume1/plex/New\ Movies/ /volume1/mount/Movies/
else
    echo "Cannot sync new Movies as Zeus is not mounted on /mount/Movies"
fi

    Next, on Synology, I move all movies and series respectively from /volume1/plex/new Movies and /volume1/plex/new Series into /volume1/plex/Movies or /volume1/plex/Series (*)

Then, to be sure I don’t have the same movie twice on the Unraid Server (with distinct encoding or resolution), I run this command in a SSH console on Unraid:

find /mnt/user/Movies -type f -iname '*.mkv' -printf '%h\n' | sort | uniq -c | awk '$1 > 1'

It does not work for the Series as each folder (season) obviously contains several episodes…

     

    This is only syncing the files! There is no easy way to sync also the metadata between the two Plex.

    But voilà….

     


(*) Doing so, the fine tunings done in Plex while the movie was under the “new Movies” folder are not lost. Temporarily, the movie will appear as “deleted” in Plex. Above all, do not “Empty Trash”! Soon (depending on how many movies you moved), it will be “available” again. I did test that trick explicitly:

    1. Take a new movie:

    2. Open it:

    3. Check that path (here under, it is under /New Movies):

    4. Edit some info for testing purpose (here under, the “Original Title”):

    5. Change also the poster:

6. Using Windows Explorer or the File Station, move the folder of the movie into its new location. The movie will appear very soon as unavailable:

7. Open it:

8. Wait… Soon it will become available again:

9. Check now the path (here under, it is now under /Movies):

10. As you can see, the chosen cover is still there. And editing the details, you would see that the original title is still “DEMO MOVE FOLDER within Plex”.


  • Easily Backup openHab into a Shared Folder

    I am running openHab2 on my raspberry pi 4 using openhabian and I wanted to schedule a backup of my config into a shared folder of my Synology.

    Backup as root

Openhabian’s backup solution, ‘Amanda’, didn’t convince me for automating such a backup, so I wrote my own backup script and scheduled it with a cron job.

    First, create a shared folder named “backups” on your Synology via Control Panel > Shared Folder > Create:

    Next, create a user account named “backup” via Control Panel > User > Create:

    Grant that account write access on the shared folder “backups” via Control Panel > User > backup > Edit > Read/Write

The Synology part being ready, move now to openhabian, on the RPI, using a ssh console (e.g. Putty) to create and schedule the backup script. Most parts will have to be done as ‘root’ (to make it simpler… but not safer), so type:

    sudo -i

If, just like me, you previously tried to configure the backup with Amanda, using the command “sudo openhabian-config” and then the menu 50 > 52, a mailer daemon (exim4) was probably installed and you may now want to remove it… Check whether it’s running (not with the command “systemctl --type=service --state=running” but) with:

    sudo service --status-all

    If you see a + in front of exim4, disable and remove it

    sudo systemctl stop exim4
    sudo systemctl disable exim4
    sudo apt-get remove exim4 exim4-base exim4-config exim4-daemon-light
    sudo rm -r /var/log/exim4/

    You can also next uninstall and remove Amanda

    sudo apt remove amanda-client
    sudo apt remove amanda-server
    sudo rm -r /var/lib/amanda/
    sudo rm -r /etc/amanda

    Now, we can start with the preparation of the backup script. Define first a mount point on your RPI. E.g.: “/mnt/backups”:

    mkdir /mnt/backups

    Define next the shared folder of your Synology by editing the fstab file:

    sudo nano /etc/fstab 

    Add the line here under in that file and save your change with CTRL-o, Enter, CTRL-x:

    //<ip of your Synology>/backups /mnt/backups cifs username=backup,password=<password>,uid=1000,gid=1000,vers=3.0 0 0

Attention: the network is usually not yet available when the fstab file is used to mount the drives at boot time. So this shared folder will most probably not be mounted automatically!
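You can nevertheless test the fstab entry immediately from the ssh console (a quick sanity check, still as root):

mount /mnt/backups
mountpoint /mnt/backups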

    Create a file:

    nano /home/openhabian/maintenance.sh

    with the backup script here under:

#!/bin/sh
# Backup Openhab to Synology
log="/var/log/maintenance.log"

echo $(date) "Run openhab maintenance: $0" >> $log

if mountpoint -q /mnt/backups
then
    echo $(date) "Synology's backups share is mounted." >> $log
else
    echo $(date) "Synology's backups share is not mounted. Try to mount as per fstab definition." >> $log
    sudo mount /mnt/backups
    sleep 3
    if mountpoint -q /mnt/backups
    then
        echo $(date) "Synology's backups share is now successfully mounted." >> $log
    else
        echo $(date) "Synology's backups share cannot be mounted." >> $log
    fi
fi

if mountpoint -q /mnt/backups
then
    # Keep the 10 last backups
    rm -f $(ls -1t /mnt/backups/Raspberry/openhab2-backup-* | tail -n +11)
    cd $OPENHAB_HOME
    sudo ./runtime/bin/backup /mnt/backups/Raspberry/openhab2-backup-"$(date +"%Y_%m_%d_%I_%M").zip" >> $log
    echo $(date) "custom backups of openhab completed." >> $log
    echo "-----------------------------------------------------------------" >> $log
fi
     

    Make that script executable (for all users…)

    sudo chmod a+x maintenance.sh

    To run that script as root on a regular basis, you have to schedule it as root (using now sudo explicitly if you didn’t type sudo -i earlier) via crontab:

    sudo crontab -e

If it’s the first time you run crontab, you will have to pick your preferred editor. I advise nano 😉

    Select an editor. To change later, run 'select-editor'.
    1. /bin/nano <---- easiest
    2. /usr/bin/vim.basic
    3. /usr/bin/mcedit
    4. /usr/bin/vim.tiny
    5. /bin/ed

    Choose 1-5 [1]: 1

In crontab, add this at the end and save that change with CTRL-o, Enter, CTRL-x:

    0 1 * * * /home/openhabian/maintenance.sh

Notice: if you want to mount the shared drives at boot, which usually fails as mentioned previously because the network is not yet available when fstab is first processed, you can add this in the crontab too:

    @reboot sleep 300; mount -a

    You can now try the script with:

    sh /home/openhabian/maintenance.sh

    If it works, it should also work when triggered by the cron job.

    Backup as openhabian

Running scripts as root is usually not recommended. But the backup script of openhab may only be run as root… We could run it with the account ‘openhab’, but the backup files will belong to the user ‘openhabian’, making the cleanup tricky. If you really don’t want to run and schedule my script as root, then the best option is to run it with the account “openhabian”:

Still being in root mode (sudo -i), create the log file manually and grant access to all users:

    touch /var/log/maintenance.log
    chmod a+rw /var/log/maintenance.log

     

Authorize the user “openhabian” to execute the backup script “/usr/share/openhab2/runtime/bin/backup”. To do this, you have to create a file in the /etc/sudoers.d folder. All files in that folder are used by the “sudo” command to authorize configured users to execute specified commands as root, with or without password. You MUST ABSOLUTELY edit that file with the command “visudo“. This one will check that your changes are valid. If you edit that file with another editor and it contains an error, you won’t be able to use the “sudo” command anymore (you would have to plug the SD card into a USB adapter on another raspberry to fix the issue or to simply delete the invalid file. USB devices are automatically mounted under /media/usbxxx if you installed the package usbmount).

visudo -f /etc/sudoers.d/openhab

    In that file, add the line here under and save your change with CTRL-o, enter, CTRL-x

    # Allow openhabian user to execute the backup script
    openhabian ALL=(ALL) NOPASSWD: /bin/mount, /usr/share/openhab2/runtime/bin/backup

     

    Unschedule the script from root’s crontab (remove the line added with crontab -e)

    crontab -e
    0 1 * * * /home/openhabian/maintenance.sh

     

And schedule it now within openhabian’s crontab (this has to be done as the ‘openhabian’ user):

    sudo -u openhabian crontab -e

    And add

    0 1 * * * /home/openhabian/maintenance.sh

     

    Et voilà.

     

    PS.: If you experience issues when mounting the remote shared folder, try to mount it interactively (using an administration account of  your Synology or an account having a password without symbols such as %, # or !)

    apt install smbclient 
    smbclient //<remote ip>/<shared folder> -U <user account>

    You can also check the latest messages from the kernel

    dmesg | tail -n10


  • Easy replacement or upgrade of HTC Vive Headset’s lenses

I scratched the lenses of my HTC Vive Pro while playing with my glasses on. I couldn’t find original replacement lenses but realized that I could simply use the lenses of my Samsung GearVR instead. And the result is amazing!


As you know if you googled for information about the lenses of the HTC Vive, those are “Fresnel lenses”, i.e. a set of flat surfaces with different angles. Many people suffer from “God Ray” artifacts with those lenses. I personally found the visual experience quite poor for such a high-end headset.

    The good news is that it’s very easy to replace those lenses. And it can be done:

1. Not only with cheap 3D printed adaptors to reuse the lenses of an old Samsung GearVR.
    2. But also with prescription lenses adapted to your eyes if required.

    First, a lot of information can be found on Google with the keywords : HTC Vive GearVR lens Mod.

If you need prescription lenses, simply google for HTC Vive prescription lenses. There are plenty of serious sites and they provide both the lenses and the adaptors:

    If you want to reuse the lenses of an old Samsung GearVR:

    • You need to print your own adaptors. Plans can be found on Thingiverse. Ex.: here or here.
    • Or you can buy them on internet. There are plenty on eBay (with or without lenses). Search for HTC Vive Mod Upgrade Kit
• The adapters are the same for the HTC Vive, HTC Vive Pro and HTC Cosmos.
• The Samsung GearVR lenses must NOT be those from the 2015 version (the white model); those are smaller. You need the dark blue version from 2016 or 2017!

    There are many How-to on the web, such as this video which is quite complete for HTC Vive or this one for HTC Cosmos.

Notice also that, contrary to what is illustrated in some videos, I had no need to unscrew anything to pop out the lenses, neither on the GearVR nor on the HTC Vive.

The improvement is really dramatic!

    Et voilà.


  • Plugin “Move WordPress Comment” not working anymore

    I just noticed that this really great plugin “Move WordPress Comment” was failing when trying to move comments. Fortunately, the fix was easy.


This plugin is very useful when you started a discussion thread (i.e. you replied to a comment), but the person does not answer on the last comment. Instead, he starts a new comment. In such a case, the plugin can be used to move his last comment under the last reply in the discussion thread.

For example, here under, I could attach the second discussion thread under the last reply of the first one by typing the id #46705 of that last reply into the “parent comment” field of the first comment #46719 of the second thread, and then clicking “Move”.

    Unfortunately, this plugin is now returning an error “Uncaught ArgumentCountError: Too few arguments to function wpdb::prepare()”

    The fix is really simple. Go to your WordPress Dashboard, under the menu “Plugins” and select the “Plugin Editor”.

    Next, in the top-right corner, set “select plugin to edit” = “Move WordPress Comment” and click “Select”.

Then, go to line 63 or search for “prepare”. This method requires 2 parameters. So, in the where clauses of the SQL Update statements, replace the variables by %s and move the variables into a second parameter.

    It should result into this:

// move to different post
if ( $postID != $oldpostID ) {
    $wpdb->query( $wpdb->prepare( "UPDATE $wpdb->comments SET comment_post_ID=$postID WHERE comment_ID=%s;", "$commentID" ) );

    // Change post count
    $wpdb->query( $wpdb->prepare( "UPDATE $wpdb->posts SET comment_count=comment_count+1 WHERE ID=%s", "$postID" ) );
    $wpdb->query( $wpdb->prepare( "UPDATE $wpdb->posts SET comment_count=comment_count-1 WHERE ID=%s", "$oldpostID" ) );
}

// move to different parent
if ( $parentID != $commentID ) {
    $wpdb->query( $wpdb->prepare( "UPDATE $wpdb->comments SET comment_parent=$parentID WHERE comment_ID=%s;", "$commentID" ) );
}

    Finally, click on “Update File”, at the bottom of the Plugin Editor.

    Et voilà,


  • Flash LSI 9211-8i and LSI 9201-16i on Asus P9X79 WS

I moved my LSI cards 9211-8i and 9201-16i from my Asus Striker II Formula onto a P9X79 WS. Unfortunately, the MB couldn’t boot with both cards plugged in. I upgraded the firmware and erased the BIOS of both cards to solve the problem.


Actually, everything was working fine until I decided to upgrade the bios of my Asus P9X79 WS from the version 48022 of 2015 to the very latest 4901 of 2018 (an attempt to solve an issue with a DIMM not being recognized… The attempt failed… I think the issue is actually the CPU not supporting Quad Channel). Anyway.

    After this upgrade, the Asus MB didn’t boot anymore. There was only a cursor blinking in the upper left corner. Clearly, IMO, the cursor of the LSI bios.

    • Booting without any LSI card was ok.
    • Booting with only the LSI 9211-8i was ok.
    • Booting with only the LSI 9201-16i was ok.
    • But booting with both LSI was failing although I tried to move them in all the various PCI-E x16 slots.

    So, I decided to attempt the upgrade of both LSI cards (currently with firmware IT version 19.00.00.00 and Bios 07.37.00.00). But while preparing the USB key to do this, reading a few articles, I realized that I could also completely erase the BIOS as I don’t boot from a disk on any of those Cards.

    So, here is what I did:

    1. Format a USB key as FAT32
    2. Download the legacy firmwares (and bios if you want) for the LSI cards from here (with Product Group = Legacy Product and Product Family = Legacy Host Bus Adapters)
      1. for 9201-16i, it’s here. You need two files
        1. 9201-16i_Package_Pxx_IT_Firmware_BIOS_for_MSDOS_Windows.zip 
        2. Installer_PXX_for_UEFI
      2. for 9211-8i, it’s here. You need two files:
        1. 9211-8i_Package_Pxx_IR_IT_FW_BIOS_for_MSDOS_Windows
        2. Installer_PXX_for_UEFI (which is actually the same file as for 9201-16i)
    3. From the Installer_PXX_for_UEFI, extract the file /sas2flash_efi_ebc_rel/sas2flash.efi and copy it on the USB Key root
    4. From the xxx_BIOS_for_MSDOS_Windows, extract
      1. the firmwares /Firmware/HBA_92xx_IT/xxx.bin and copy them in the USB Key root:
        1. 9201-16i_it.bin
        2. 2118it.bin
      2. the bios sasbios_rel/mptsas2.rom (it’s the same bios for both cards) and copy it in the USB Key root
    5. Now, you need a UEFI shell which works with the P9X79. This one for example or the v2 to be found here didn’t work for me. Instead, I was lucky with this one:
      1. download this Shell_Full.efi on your USB Key root
      2. and rename it Shellx64.efi
3. for some users, they have to copy it from the root into subfolders like:
        1. /efi/boot/Shellx64.efi
        2. /boot/efi/Shellx64.efi
    6. Plug the USB Key into your Asus P9X79 WS. For safety, remove all other USB keys and disconnect all HDD/SSD.
7. Plug next only one LSI card and upgrade it. Once done, remove it, plug the other card and upgrade it too.

To upgrade one LSI card with your Asus P9X79 WS:

    1. Boot the PC and go into the Bios with Del or F2.
    2. Go into the “Advanced mode” (F7 or click on the button bottom left)
    3. Click on the “Boot” menu (upper right)  and scroll down until you see “Secure Boot” (screenshot 1)
    4. Enter into “Secure Boot” and select “OS Type” = “Other OS” (screenshot 2). This is required to be able to use the UEFI Shell later
    5. Press F10 to “Save and Exit” (screenshot 3)
    6. Go back into the Bios with Del or F2.
    7. Go into the “Advanced mode” (F7 or click on the button bottom left)
    8. Click on the “Exit” menu (upper right)
    9. Select “Launch EFI Shell from filesystem device” (screenshot 4)
      1. if you have only the USB key prepared previously plugged into the PC, you should soon see the EFI Shell loading
10. Once the EFI Shell is loaded, move to the USB Key and list its content with the commands fs0: and next ls. If you have several USB keys, you need to move onto the right one. You can use the command map -b to list all available disks and identify the correct one. Ex.: if the correct one is “fs1 :Removable HardDisk – … USB(…)”, then you can move onto it with fs1:. You can stop the map command by typing q.
11. Now check the versions of your LSI card with sas2flash -list
12. Remove the Bios with the erase command and the parameter 5, as documented in SAS2Flash_ReferenceGuide.pdf (screenshot 5): sas2flash -o -e 5
13. Possibly also erase the controller flash memory (I didn’t have to do it) with the command: sas2flash.efi -o -e 6
    14. Type again sas2flash -list to verify the Bios Version now reads N/A
    15. To load a new firmware, depending on the LSI card, type :
      1. sas2flash.efi -o -f 9201-16i_it.bin
      2. sas2flash.efi -o -f 2118it.bin
    16. To load the bios, type: sas2flash.efi -o -b mptsas2.rom

Do this for each LSI card, one at a time.
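Summarized, the whole EFI-shell sequence from the steps above looks like this, shown here for the 9211-8i (use 9201-16i_it.bin instead of 2118it.bin for the 9201-16i; the last line, which reloads the boot bios, is only needed if you boot from disks attached to the card):

fs0:
sas2flash.efi -list
sas2flash.efi -o -e 5
sas2flash.efi -o -f 2118it.bin
sas2flash.efi -o -b mptsas2.rom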

     

    Documentation:

     

    Screenshot 1: Secure Boot

    Screenshot 2: Other OS

    Screenshot 3: Save and Exit

    Screenshot 4: Launch EFI Shell

    Screenshot 5: Erase command for sas2flash


  • How to remote access MySQL on openHabian (RPI 4)

    I wanted to use phpMyAdmin on a Synology to access a MySQL running on a RPI with openHabian. Here is my how-to:


    First connect on your openHabian using a ssh console.

    Obviously, you need MySQL to be installed and configured:

    sudo apt update

    sudo apt upgrade

    sudo apt install mariadb-server

    sudo mysql_secure_installation

    Then, double check that MySQL is running and listening on port 3306
    netstat -plantu | grep 3306

    If nothing is displayed by this command, MySQL is not listening on the port 3306.

    Enter MySQL as root with the command:

    sudo mysql -uroot -p

    Check the port used by MySQL

SHOW GLOBAL VARIABLES LIKE 'PORT';

    Then, type the following MySQL commands to create an account and a database, and grant both local and remote access for this account on the database:

CREATE USER '<YourAccount>'@'localhost' IDENTIFIED BY '<YourPassword>';

CREATE DATABASE <YourDatabase>;

GRANT ALL PRIVILEGES ON <YourDatabase>.* TO '<YourAccount>'@'localhost';

GRANT ALL PRIVILEGES ON *.* to '<YourAccount>'@'169.254.0.%' identified by '<YourAccountPassword>' WITH GRANT OPTION;

FLUSH PRIVILEGES;

    Here above, I do grant access from all machines in my local network with ‘169.254.0.%’. One can restrict access to one machine with its specific address, such as : ‘169.254.0.200’

Now, edit 50-server.cnf to configure MySQL to no longer listen only on its local IP (simply comment out the bind-address line):

    sudo nano /etc/mysql/mariadb.conf.d/50-server.cnf

    # Instead of skip-networking the default is now to listen only on
    # localhost which is more compatible and is not less secure.
    #bind-address = 127.0.0.1

    Finally, restart MySQL for the changes above to be applied:

    sudo service mysqld restart
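To double check the remote access before touching phpMyAdmin, you can try to connect from another machine of the network (assuming a mysql/mariadb client is installed there; replace the placeholders as above):

mysql -h <ip of your RPI> -P 3306 -u <YourAccount> -p <YourDatabase>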

    You can now edit the config of phpMyAdmin to access the MySQL on your RPI. If it is running on Synology, look here.
