A Solaris box to backup my Netgear ReadyNAS NV

Introduction

This writeup is from October, 2008. Here’s some background info – I store all my stuff in an Infrant (now Netgear) ReadyNAS NV. “Stuff” includes half a terabyte’s worth of family photos, and quite a lot of work-related information. My home office is Apple-based (a G5 tower and some aging PowerBooks), and the files shared are DICOM images and complex “package” documents from, e.g., Mellel, DevonThink, or Keynote, whose survival across a network depends on preserving their intricate directory structure.

No need to get into detail, but I gradually became quite dependent on my little silver friend — not to mention the very helpful ReadyNAS community. Soon enough, it also became apparent that one must implement a reliable backup strategy. The reasons are obvious: what if the NV breaks or gets stolen? Clearly, it’s not a good thing to concentrate everything in a single device.

Ubuntu Linux is a good option

Up until recently, I had an Ubuntu server dedicated to that purpose: nothing fancy, just a large ATX box with a hefty power supply and a SATA backplane. It was my first DIY “build”, and, being conservative by nature, the actual computer was relatively low-end: a 1.8 GHz Celeron D on an Intel 915GEV motherboard with 1 GB of RAM.

More importantly, the Ubuntu server preserved all the AFP-related information found in Apple network environments (AppleDouble and .DS_Store files), which was crucial to my work habits. Plus, as Netatalk is easily installable via apt-get, the box could serve anything over AFP, preserving all Apple-specific metadata. Very sweet!

Ubuntu was fine, and I had a great time customizing the desktop – to the point it was as “lickable” as Mac OS X 🙂 

Managing a Linux RAID (via the cryptic “mdadm” command), however, was beyond my comprehension and my abilities, especially if something went wrong – e.g. the inevitable disk failure.
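To give a flavour of what I found daunting: even a routine disk replacement in a Linux software RAID means a sequence like the following (a sketch only – the device names are illustrative, and the exact steps depend on your setup):

```shell
# Check the array's health (the summary is terse, to put it mildly)
cat /proc/mdstat
mdadm --detail /dev/md0

# After a disk failure: mark the member failed, remove it,
# swap the hardware, then add the new disk back by hand
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdb1
```

Nothing wrong with any single command – it is getting them right, in order, on a degraded array holding your only copy of the data, that frays the nerves.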

So, I started looking for alternative solutions. This time, the GUI environment was not an issue, just the mechanical details. My plan was for a headless server, i.e. one without a display, that would be reliable and easy to manage via remote console login – the so-called “secure shell” (ssh).

Arguably, an OS implementing ZFS would be ideal, as its reliability and ease of administration have earned high marks, even in the home setting. As far as I know, full ZFS implementations exist only in Solaris and FreeBSD distributions; common sense dictated trying Solaris first, since ZFS was essentially developed by Sun.
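For comparison, the day-to-day health check of a ZFS pool boils down to two commands (a sketch – “tank” is a hypothetical pool name):

```shell
# One command reports the state of every disk in the pool
zpool status tank

# One command walks the whole pool and verifies every checksum,
# in the background, while the pool stays in use
zpool scrub tank
```

That difference in administrative surface area was, for me, the whole argument.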

In summary, the objective was to build a server, copy all the NV’s shares over, and then connect the two via a wireless network to provide scheduled backups. This way, I would be able to access all my stuff both at the office and at home – which, around Lycabettus, are often adjacent apartments ;-).

The versatility of AFP-based networking is not available in Solaris, but that’s OK.

Why not get another NAS box?

One final note – I considered getting a second NAS server (from Netgear, or other manufacturers), but decided against it for a number of reasons. The most important is that they are not represented locally, so one gets really stuck if something goes wrong – and this is no iPod we’re talking about!

Another point is that a ZFS storage “pool” provides ample room for expansion, perhaps for archival storage of digital video, not to mention that parts of it can be exported as high-speed virtual local disks (the term is “iSCSI”) for, say, Time Machine backups. All in all, a very tempting proposition!
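For the record, carving such a volume out of a pool appears to be a two-liner in the Solaris of that era – this is a sketch, with hypothetical names and size, using the old “shareiscsi” property:

```shell
# Create a 300 GB ZFS volume (a "zvol") inside the pool
zfs create -V 300g tank/timemachine

# Export it as an iSCSI target
zfs set shareiscsi=on tank/timemachine
```

The Mac then sees the target as an ordinary local disk, once an iSCSI initiator is installed on it.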

Hardware check

As I had opted to use existing components, my job was easy – all I did was check the Solaris Hardware Compatibility List (HCL). It came as a pleasant surprise that one can run a full-fledged SunOS server on a lowly Celeron. Optimal performance, however, would require a modern 64-bit CPU (Intel or AMD) – try googling within the Ars Technica site for useful insight.

If I were starting from scratch, I would have picked a Core2Duo-based ATX board with dual network cards and six or more internal SATA ports.

Just keep in mind that raw computing power, as well as energy conservation (via kernel “speed-stepping”), are at play here, and it seems that low-wattage modern Intel chips with more than 4 GB of RAM would be best suited for Solaris server use.

Oh, one more thing. My objective was to use wireless networking to link the Solaris box with the ReadyNAS, so I also had to pick a PCI card for Wifi. That was not an easy task.

I ended up going with an Atheros-based solution, and got the Linksys/Cisco WPN300N (whose European models use the Atheros chip). A separate saga applies to installing and configuring that, though.

Solaris installation – which variant?

A good choice is the “Solaris Express Community Edition” (SXCE), as suggested by Simon Breden. From what I understand, this is essentially a mature OpenSolaris binary distribution that can incorporate all current driver updates (don’t quote me on this, though). In any case, installation went smoothly, except for the network configuration part.

There’s no point in me guiding you through installation, as I am a true Solaris newbie. I found this “Step-by-step guide to installing Solaris 10”, by Dennis Clarke, very useful, and it seems to apply to both the Community (SXCE) and Developer (SXDE) editions. All I can say is that it took me two weeks (!) – and many “sys-unconfig” commands – to figure out how to set up an internet connection that survived system reboots, and the culprit was the “inetmenu” service.

So, here are some installation-related notes, just in case you bump into similar problems:

Disabling inetmenu

Issuing the following command to disable it solved my network configuration problems – and it took me weeks to figure this out! See link.

# svcadm disable network/inetmenu
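For reference, with inetmenu out of the way, a static network configuration that survives reboots boils down to a handful of plain files – the interface name (e1000g0) and the addresses below are just examples from a typical setup:

```shell
# Give the interface a hostname; Solaris plumbs it at boot
echo myserver > /etc/hostname.e1000g0

# Map that hostname to a fixed IP address
echo "192.168.2.10 myserver" >> /etc/hosts

# Default gateway and DNS resolver
echo 192.168.2.1 > /etc/defaultrouter
echo "nameserver 192.168.2.1" > /etc/resolv.conf
```

Also make sure the “hosts” line in /etc/nsswitch.conf includes “dns”, or name lookups will ignore the resolver you just configured.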


CIFS/SMB or “SAMBA”? Aren’t they the same?

No, they are different! Solaris comes with SMB (or CIFS) networking built-in. The name SAMBA refers to an open-source implementation of the protocol which, apparently, is not required – and, in fact, can be deleterious when run concurrently with native CIFS, as Simon points out in his ZFS NAS blog.

So – no need to install Samba, then. Just figure out how to set up CIFS/SMB within Solaris – simple ;-)
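In its barest form – and do cross-check against Simon’s instructions – enabling the native service looks something like this (WORKGROUP is a placeholder):

```shell
# Enable the in-kernel CIFS server and its dependencies
svcadm enable -r smb/server

# Join a Windows workgroup (domain mode exists too, but is overkill at home)
smbadm join -w WORKGROUP
```

One catch: native CIFS keeps its own password hashes, so each user’s password must be re-set with “passwd” after the pam_smb_passwd.so.1 module has been added to /etc/pam.conf.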

Wireless networking

That’s not entirely relevant to the Solaris NAS topic, but very important for my specific circumstances. I installed the latest Atheros driver (ath) from the OpenSolaris wifi site, and used the dladm utility to configure the connection. Keep in mind that 802.11n is not supported as yet.

The steps were as follows (logged in as root):

solaris# wget http://www.opensolaris.org/os/community/laptop/downloads/ath-0.7.3-pkg.tar.gz
--20:55:13--  http://www.opensolaris.org/os/community/laptop/downloads/ath-0.7.3-pkg.tar.gz
           => `ath-0.7.3-pkg.tar.gz'
Resolving www.opensolaris.org... 72.5.123.5
Connecting to www.opensolaris.org|72.5.123.5|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 361.472 (353K) [application/x-gzip]
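From there, installing the package and bringing the link up went roughly as follows – a sketch from memory, where the network name is a placeholder and your interface may well not be ath0:

```shell
# Unpack the driver archive and install the package
gunzip -c ath-0.7.3-pkg.tar.gz | tar xf -
pkgadd -d .

# Scan for access points, then associate with one
dladm scan-wifi
dladm connect-wifi -e MyNetwork ath0
```

For an encrypted network, the key is first stored with “dladm create-secobj” and then referenced via the -k option of connect-wifi.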


Install the ‘nano’ console editor

As I was coming from a Mac background, there was no way I would manage to learn vi, the default editor in Solaris. Installing nano wasn’t hard, and here’s how it’s done:

# pkgadd -d http://www.blastwave.org/pkg_get.pkg
# PATH=$PATH:/opt/csw/bin
# pkg-get -i nano

The Blastwave site carries a lot of nice software, along with straightforward instructions. Sunfreeware is another site with files to download.

Remote console login

Once the network cards are set up, the best way to administer the server is via ssh. All the screenshots on this page are from a PowerBook running iTerm in Leopard. So: grab some coffee (or ouzo?) and let’s get started!

Here’s a sample screen illustrating some basic administration commands:

RAIDZ or maybe RAIDZ2?

Our first major dilemma: how much redundancy should we put into the ZFS storage pool? I’m not an IT professional, let alone a ZFS expert, so my two cents worth are as follows.

Four disks are really necessary to get decent capacity as well as adequate safety. In RAIDZ configuration, the four one-terabyte Samsungs I got provide about 2.6T of storage. To my mind, the added reliability of RAIDZ2, which tolerates up to two concurrent disk failures, is clearly worth the increase in cost (around €120 locally). So I ended up using five 1-terabyte disks (four Samsungs and one Seagate) in RAIDZ2 configuration. The total pool size is identical, at 2.6T (wow!), but, again, let me stress that the array can survive two disks failing at once.
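The back-of-the-envelope arithmetic: a raidz group offers roughly (disks − parity) × disk size of usable space, and a “marketing” terabyte comes to about 931 binary gigabytes. A quick sanity check in the shell:

```shell
# Usable space of a raidz vdev: (disks - parity) * per-disk size
disks=5; parity=2; size_gib=931   # a 1 TB disk is ~931 GiB
echo "$(( (disks - parity) * size_gib )) GiB usable"
# prints "2793 GiB usable", i.e. ~2.7 TiB before filesystem overhead
```

Which is in the same ballpark as what zfs actually reports for the pool.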

Create a disk “pool”

Let’s create a pool. First, we’ll examine the disks that are available with the “format” command – don’t be afraid: pressing control-C gets you out.

The four Samsungs are attached to a 4-port PCI SATA card that emulates a generic Silicon Image chipset (IIRC it’s 3124). The one disk standing out is attached to the motherboard SATA port. Your console screen will obviously be different.

Solaris% su -
<enter root password>
solaris# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c1t0d0 <ATA-SAMSUNG HD103UJ-1112-931.51GB>
          /pci@0,0/pci8086,244e@1e/pci1095,7124@2/disk@0,0
       1. c1t1d0 <ATA-SAMSUNG HD103UJ-1112-931.51GB>
          /pci@0,0/pci8086,244e@1e/pci1095,7124@2/disk@1,0
       2. c1t2d0 <ATA-SAMSUNG HD103UJ-1112-931.51GB>
          /pci@0,0/pci8086,244e@1e/pci1095,7124@2/disk@2,0
       3. c1t3d0 <ATA-ST31000340AS-SD15-931.51GB>
          /pci@0,0/pci8086,244e@1e/pci1095,7124@2/disk@3,0
       4. c2d0 <DEFAULT cyl 9726 alt 2 hd 255 sec 63>
          /pci@0,0/pci-ide@1f,1/ide@0/cmdk@0,0
       5. c3d1 <ST310003- 3QJ06YT-0001-931.51GB>
          /pci@0,0/pci-ide@1f,2/ide@0/cmdk@1,0
Specify disk (enter its number): ^C
solaris# zpool create opel raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c3d1
invalid vdev specification
use '-f' to override the following errors:
/dev/dsk/c3d1s0 contains a pcfs filesystem.


Oops! The Seagate used to hold my photo collection, and had been formatted in Apple HFS. Let’s override the warning using the “-f” flag.

solaris# zpool create -f opel raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c3d1
solaris# zpool status
  pool: opel
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        opel        ONLINE       0     0     0
          raidz2    ONLINE       0     0     0
            c1t0d0  ONLINE       0     0     0
            c1t1d0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
            c3d1    ONLINE       0     0     0

errors: No known data errors
solaris# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
opel   107K  2,66T  32,2K  /opel
solaris#

Ok. The “zpool” is ready, and is mounted at the root of the filesystem under its own name, i.e. “/opel”. Time to create some filesystems, corresponding to the ReadyNAS shares.

solaris# zfs create opel/devo
solaris# zfs create opel/leonardo
solaris# zfs create opel/opus
solaris# zfs create opel/super8
solaris# zfs create opel/testing
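With the filesystems in place, it’s worth knowing that each of them can be frozen in time with a snapshot – practically free, thanks to ZFS’s copy-on-write design. A sketch, with an example date:

```shell
# Freeze a share's state, e.g. after a successful backup run
zfs snapshot opel/devo@2008-10-30

# List all snapshots; roll back if disaster strikes
zfs list -t snapshot
zfs rollback opel/devo@2008-10-30
```

Taken after each backup, snapshots also guard against the scenario where a corrupted source happily overwrites a good backup.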

Define the sharing setup: NFS and SMB

Here comes some tricky stuff.

Out of the box, Solaris can serve its ZFS filesystems as both SMB/CIFS and NFS shares. It is important to keep in mind, however, that our primary purpose is to provide full backup of a ReadyNAS containing AFP shares.

From this perspective, the optimal protocol is NFS, as it allows copying of all the funny file names beginning with a period (.AppleDouble and .DS_Store), as well as maintaining modification times properly – this I have found out by trial and error. CIFS does not behave properly in this context, but presents the data fine to the Solaris box’s clients.

When it comes to NFS file sharing, an important concept is that of “root squashing”. I have a very limited understanding of what that really means, but the general idea is that the ReadyNAS must be able to “take over” the share, and transfer data as “root”, in order for the backup to get through.

Armed with this – rudimentary – background information, let’s jump into action. Remember: we should be logged in as “root” on our Solaris box, and have an administrative user named “yourusername” already set up, via the Gnome GUI utilities.

Also, write down the network domains of both the Solaris box, as well as the ReadyNAS. For example, in my case, the numbers are as follows:

Netgear ReadyNAS <in my office> – its router is 192.168.1.1, i.e. the network is 192.168.1.0/24

Solaris box <in my apartment> – its router is 192.168.2.1, i.e the network is 192.168.2.0/24

First, let’s navigate to the ZFS pool, mounted at the filesystem root (its name is “opel”, as in the car maker):

solaris# cd /opel
solaris# chown yourusername:staff *
solaris# zfs set sharenfs=rw=@192.168.1.0/24,ro=@192.168.2.0/24,root=@192.168.1.0/24 opel


This command enables NFS sharing with full read/write privileges to clients in the 192.168.1.0/24 domain (i.e. the ReadyNAS), allows root access to the same domain, and provides read-only access to local NFS clients (we don’t really care about those, as NFS does not really play nice with Macs, from what I hear).
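To double-check what is actually being exported, the stock “share” command (run with no arguments) lists every active share with its options, and zfs can report the property recursively:

```shell
# Every active share, with its options
share

# The sharenfs property for the pool and all its children
zfs get -r sharenfs opel
```

The child filesystems inherit the pool’s sharenfs setting, which is exactly what we want here.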

In case you wonder how the ReadyNAS will access the Solaris box from a remote domain, this will be done wirelessly. The same principle applies to a second network card, or to the simplest of setups, involving a single domain.

Now, let’s take care of SMB sharing, too:

# zfs set sharesmb=on opel


This wraps it up. One must obviously take care of some SMB-related housekeeping chores, but you’re better off checking Simon’s blog for instructions.

Now we’re ready to transfer stuff over from the ReadyNAS. Let’s see how it goes…

Creating a backup job in ReadyNAS’s web console

The Solaris disks are spinning, ready to become backup destinations for ReadyNAS’s Frontview. The approach is as follows:

First, we have to create a backup job for each ReadyNAS share. Its destination is the NFS share of the same name on the Solaris box (keep an eye on the syntax!). User names and passwords are not necessary at this point.

The important thing to note, at this point, is the address of the NFS Solaris share (and, again, the syntax). An IP of “192.168.1.101” is within the range that enjoys “root squashing” privileges, as set by the “zfs set sharenfs” command over at the Solaris box. In my case, it corresponds to the static IP of the LinkSys wifi PCI card.

Please note that for the initial backup I chose to connect the Solaris server directly to the ReadyNAS’s router (an Airport Extreme) via Gigabit Ethernet, as it would be totally impractical to transfer the shares over wifi. Subsequent backups should be OK, though…

We’re ready to press the “Go” button, and start the backup. Do it (it should work :-).



<The end, and glory be to God!>

October 30, 2008
