Month: January 2013

  • Remote connection to Server 2012 Essentials

    I didn’t manage to connect to my Server 2012 Essentials through Remote Desktop until I remembered that it is set up by default as a domain controller, one of the major differences between this server and WHS 2011.

    First, check that Remote Desktop is enabled on the server, just to be sure:

    1. On the Start Screen, type SystemPropertiesRemote and run that tool
    2. Select “Allow remote connections to this computer”.
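
    For reference, the same setting can be checked and changed from an elevated PowerShell prompt. This is a minimal sketch, assuming the standard registry value behind that dialog:

        # 0 = remote connections allowed, 1 = denied
        Get-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' | Select-Object fDenyTSConnections

        # Enable Remote Desktop by clearing the "deny" flag
        Set-ItemProperty 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' -Name fDenyTSConnections -Value 0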

    Next, since I have disabled the firewall, I don’t need to allow Remote Desktop Connection as an exception in it. (If you prefer to keep the firewall on, the exception can be enabled instead, as shown below.)
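
    A sketch of that alternative, using Server 2012’s NetSecurity cmdlets (note that the display group name is localized, so it may differ on non-English installations):

        # Turn on the built-in Remote Desktop rule group instead of disabling the firewall
        Enable-NetFirewallRule -DisplayGroup "Remote Desktop"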

    Finally, Windows Server 2012 Essentials being a domain controller, accounts created on that machine belong to the domain defined at the end of the installation. So, the user name to be provided in Remote Desktop must be of the form <domain name>\<user name> instead of simply <user name> (which is equivalent to <remote machine name>\<account>, as by default Remote Desktop uses the target machine name as the domain name). See the example below.
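
    For example, from a client PC (MYSERVER, MYDOMAIN and John are hypothetical names, to be replaced by your own):

        # Open the Remote Desktop client against the server
        mstsc /v:MYSERVER
        # ...then log in as MYDOMAIN\John rather than simply John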

    This subtle difference between Server 2012 Essentials and WHS 2011 (which was not a domain controller) could be a source of confusion for “end-users” accustomed to working with workgroups instead of domains when they try to connect.

  • Zeus is dead. Long live Zeus!

    As mentioned in a previous post, I was planning to build a new home server (for file storage and virtual machines) to replace my previous one, named Zeus. Reason: it was experiencing more and more sudden reboots/crashes and I was afraid that the motherboard could be reaching its end of life.

    Why Zeus was crashing was not as important as the possible consequences of a definitive hardware failure. I used to store on it thousands of personal photos and videos, as well as all my ripped CDs/DVDs, software, PC backups, many VMware VMs, etc. And I was using a RAID 5 based on the onboard controller.

    1. In case of hardware failure, I would have been unable to access my data, and I would have had to find a new motherboard with the exact same controller to be able to rebuild the RAID 5.
    2. In addition, although an onboard “hardware” RAID 5 (not true hardware RAID) offered good access performance in normal conditions, it was deadly slow after a crash, as it had to re-check all the data.
    3. Finally, I was not able to put more than 12 disks in Zeus’ case.

    So, I decided to build a new home server,

    1. with a software RAID, so as not to depend on any specific hardware. The idea is that I could replace any dead part with any other one (possibly with a different chipset, …) and still be able to access the data
    2. with a very large case where I could add new disks whenever required, without relying on special internal multi-bays (like the Icy Dock ones…) or external drives.

    From the hardware point of view (for my needs, home server = not a true server = desktop hardware):

    • For the case, I bought a Norco RPC-4224-like case with 24 hot-swappable Sata/SAS III drive bays, sold by X-Case. They sell that case in two versions, one for home servers (RM424s) and one for servers (RM424). I took the server version, as the home one was not immediately available (my brother bought one too… both cases made a very long delivery trip 🙂 ).
      • 4U rackmount design
      • Supports EEB (12″x13″), CEB (12″x10.5″), ATX (12″x9.6″), Micro ATX (9.6″x9.6″) and Mini-ITX (6.7″x6.7″) motherboards
      • 24x hot-swappable Sata 6Gb/s (compatible Sata 3Gb/s, 1.5Gb/s) / SAS drive bays
      • Six internal SFF-8087 Mini SAS connectors support up to twenty-four 3.5″ or 2.5″ Sata 6Gb/s or SAS hard drives, mounted on horizontal backplanes for better ventilation.
      • Hot-swappable HDD trays with a specially designed power-off and lock mechanism + LED indicators for power and activity on each HDD tray.
      • 3 ball-bearing cooling fans for better ventilation in the case, plus 2 additional cooling fans
      • Two front USB ports
      • Redundant 4pin molex PSU connectors support redundant power supply
      • Screwless top cover
      • Smooth borders prevent lacerating your skin

    Next, I decided to reuse the motherboard, RAM and CPU of my current PC (Chaos): a good opportunity to buy a new motherboard supporting newer processors/RAM/devices to upgrade Chaos.

    • an old Asus Striker II Formula with
      • CPU: Intel Socket 775 (for Core 2 Quad/Core 2 Extreme/…)
      • Chipset: NVIDIA nForce 780i SLI
      • Bios: version 2042
      • Front Side Bus: 1333/1066/800MHz
      • RAM: 4 x DIMM DDR2 (Max 8 GB) – Dual Channel Architecture.
      • Storage: 1 x UltraDMA 133/100/66/33, 6 x Sata 3Gb/s ports with NVIDIA MediaShield RAID supporting RAID 0/1/5/10/JBOD.
      • Slots: 2 x PCIe 2.0 x16 (mode dual x16), 1 x PCIe x16 (mode x8), 2 x PCIe x1, 2 x PCI 2.2
      • LAN: 2 x Gigabit LAN
      • Audio: 8-channel HD audio (SupremeFX II audio card, ADI 1988B, plugged into one of the PCIe x1 slots).
      • IEEE 1394: 2 x 1394a ports (1 at back panel, 1 onboard)
      • USB: 12 x USB 2.0 ports (6 at back panel, 6 via onboard headers).
    • 1 old CPU Intel Core 2 Quad Q6700, 8MB cache (LGA 775), 2.66GHz.
    • 1 old Zalman CNPS9500 LED CPU Cooler
    • 4 x 1GB old RAM DDR2-800 CL5 (5-5-5-15 at 333MHz) Kingston HyperX Blue in Dual Channel mode.
    • 4 x 2GB new RAM DDR2-1066 CL5 (5-5-5-15-2N) G.Skill F2-8500CL5D-4GBPI-B in Dual Channel mode.
    • 2 x 150 GB old HDD Western Digital VelociRaptor (WD1500AHFD) Sata 1.5Gb/s, 10,000 RPM, 16MB cache (read 128MB/s, write 142MB/s), in RAID 0 for the OS (Windows Server 2012 Essentials)
      • They are controlled by 2 onboard Sata controllers
      • They are mounted inside the case, next to the motherboard, but not in one of the 24 bays.
    • 1 old ATI Radeon 9600 256MB 128-bit DDR AGP
    • Data disks (each named a Unit of Risk: UoR) will be formatted with NTFS, and FlexRAID is going to be used to create a smart RAID system protecting data with a snapshot model.
      • Disks with data (named Data-Risk-Unit – DRU), once removed, will be readable from any other PC using any other kind of Sata controller.
      • Parity (stored on disks named Parity-Protection-Units – PPU) will be computed only once a day (snapshot model), so it does not slow down data access (notice: a real-time model is also supported). Notice: a PPU must be as large as the biggest DRU (see the generic parity sketch after this list).
      • New DRU can be added at any time: data won’t be erased.
      • Support for multiple parity levels (e.g.: with parity level 3, no data will be lost if at most 3 disks fail at the same time).
      • So:
        • Failure of one UoR does not affect any other UoR in the array.
        • If you lose more UoRs than the parity level supports, you only lose those “extra” devices. All your remaining devices will be healthy, and the data on them will be fully readable.
    • Alternatively, data disks (each named a Unit of Risk: UoR) will be formatted with NTFS, and tRAID is going to be used to create a smart RAID system protecting data in real time.
      • Disks with data (named Data-Risk-Unit – DRU), once removed, will be readable from any other PC using any other kind of Sata controller.
      • Parity (stored on disks named Parity-Protection-Units – PPU) will be computed in real time, with some impact on performance but a higher protection level. Notice: a PPU must be as large as the biggest DRU.
      • New DRU can be added at any time: data won’t be erased.
      • Support for up to 2 PPUs, so no data is lost if at most 2 disks fail at the same time.
      • Data can still be accessed in real time even while up to 2 disks have failed, thanks to a live-reconstruction feature.
      • So:
        • Failure of one UoR does not affect any other UoR in the array.
        • If you lose more UoRs than the parity level supports, you only lose those “extra” devices. All your remaining devices will be healthy, and the data on them will be fully readable.
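
    To illustrate the single-parity idea generically (this is only a sketch of how one PPU protects several DRUs; it is not FlexRAID’s or tRAID’s actual engine): the parity is the XOR of the data blocks, and any single lost disk can be rebuilt from the survivors.

        # Three tiny "DRUs" of 4 bytes each
        $dru1 = [byte[]](1, 2, 3, 4)
        $dru2 = [byte[]](10, 20, 30, 40)
        $dru3 = [byte[]](7, 7, 7, 7)

        # The "PPU" stores the XOR of all DRUs, block by block
        $ppu = 0..3 | ForEach-Object { $dru1[$_] -bxor $dru2[$_] -bxor $dru3[$_] }

        # If $dru2 dies, rebuild it from the surviving DRUs and the PPU
        $rebuilt = 0..3 | ForEach-Object { $dru1[$_] -bxor $dru3[$_] -bxor $ppu[$_] }
        # $rebuilt is again 10, 20, 30, 40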

    Here above, “old” only means “re-used from the previous lives” of Zeus or Chaos…

  • nVidia nForce 780i SLI – Asus Striker 2 Formula – one Ethernet port disappeared

    I have detected a new issue this evening – with the new machine I am building on top of my Asus Striker II Formula – while trying to configure NIC Teaming in Windows Server 2012: one NIC appeared to be completely “off”.

    One NIC (Network Interface Card – there are two on this motherboard) disappeared from Windows’ device list and was therefore not available anymore to be used for NIC Teaming… Looking at the back of the machine, that NIC appeared to be “off” (none of its LEDs lit up).
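
    For reference, Server 2012’s NetAdapter module gives a quick way to check whether a NIC is merely disabled or truly gone from Windows (a disabled NIC still shows up, with Status “Disabled”):

        # List the NICs Windows currently sees, with their state
        Get-NetAdapter | Format-Table Name, InterfaceDescription, Status, LinkSpeed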

    NIC Status: There are two indicator LEDs on a typical NIC. A single lit green LED indicates the computer is connected to the network. This is called the “link” light. The second LED is amber in color. A flashing amber LED indicates message packet collisions are occurring. Occasional collisions are normal on a busy network, but a frequently lit amber LED is an indication of problems. A quickly flashing link LED (green) is a network activity indicator, meaning that communication is occurring. If the green link light is off, and the amber LED is blinking, then the NIC is in “power save” mode.

    While I am not really sure which of my changes is responsible for this issue, I suspect it is a side effect of playing around to enable WOL and Hibernation on the machine. At least, similar issues have been reported on the web for this motherboard/NVidia chipset when using Sleep mode…

    Nothing I did in Windows brought that NIC back (I uninstalled and re-installed the drivers, tried more recent drivers, …, rebooted several times… and the NIC was not simply disabled in Windows).

    While restarting the machine once more, I noticed that the LEDs of the disabled NIC stayed off during the reboot. So, I rebooted the machine after a complete power-off (> 1 minute). Then the LEDs started to blink… but once Windows had loaded, they went off again.

    I finally succeeded in recovering the NIC by pressing the “Clear CMOS” button on the back of the motherboard. This resets the BIOS to its default settings (in my case, I had to re-enable the RAID support on the onboard Sata controller as well as “Power On By PCI/PCIE Devices” to support WOL).

    Re-enabling WOL in the BIOS didn’t bring the issue back. Regarding Hibernation, I won’t test/re-enable it, as I don’t plan to use it anymore: it’s indeed not recommended when using FlexRaid (which I plan to use instead of Windows’ Storage Spaces). And anyway, to support that mode, I should replace my graphics card with a more recent one.
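
    As a side note, a quick way to see which devices are armed to wake the machine is the built-in powercfg tool:

        # Devices currently allowed to wake the machine
        powercfg /devicequery wake_armed
        # Devices capable of waking it at all
        powercfg /devicequery wake_from_any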

    NIC Teaming: There are 2 x 1GbE network interfaces available on my Asus Striker 2 Formula, but only one was actively used; the other one was not connected. A single 1GbE link (which provides roughly 100Mbytes per second of throughput) is a common bottleneck for a file server, especially when reading data from cache or from many disk spindles (physical disk throughput is the other most common bottleneck). A simple solution for improving performance consists in enabling the multiple NICs and creating a NIC team, also known as “Link Aggregation” or “Load Balancing and Failover (LBFO)”. It’s even stupidly simple to enable with Server 2012 (see the sketch below). Well, not that simple… as the switch may have to be configured accordingly.
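
    A minimal sketch with the Server 2012 LBFO cmdlets (“Ethernet” and “Ethernet 2” are hypothetical adapter names; list yours with Get-NetAdapter first):

        # Create a switch-independent team out of the two onboard NICs
        New-NetLbfoTeam -Name "Team1" -TeamMembers "Ethernet", "Ethernet 2" -TeamingMode SwitchIndependent

    In Switch-Independent mode, the switch needs no special configuration; switch-dependent modes (Static or LACP) require matching settings on the switch side.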

    Actually, there are two benefits in using a Team of NICs: Performance and Resiliency.

    Resiliency means that if one of the network cards fails, the remaining NICs in the team keep making sure that traffic gets through (personally, I don’t care about this advantage).

    Regarding performance, the team can take advantage of the aggregated bandwidth of all its members. So in theory, 2 x 1Gb NICs should be able to give 2Gb of bandwidth.

    So far, I am using the default NIC Teaming configuration: Switch-Independent and Active/Active. To measure the performance, I need two clients with one GbE port each. Unfortunately, I have only one available for now… the other one is in pieces, waiting for some maintenance :/
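
    In the meantime, the team’s health can at least be checked from PowerShell (both members should report as Active):

        # Team and member status
        Get-NetLbfoTeam
        Get-NetLbfoTeamMember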

    Later, I will see if I can use a Switch-Dependent configuration, as soon as I find time to learn what can be configured on my switch: a Netgear JGS524E – Gigabit ProSafe Plus 24 ports.