
Accelerating a RAID-5 array with a solid-state hard drive.


A couple of weeks ago, one of my co-workers mentioned in passing that he'd surprised himself by adding an SSD (solid state drive) to his file server at home.  To recap a bit, Leandra, my primary server at home, has a sizable RAID-5 array storing all of my data.  However, one of the tradeoffs is that stuff recently written to the array is a little slow to be read back.  It's really not noticeable unless you're logged in and running commands, and even then the lag is something like one or two seconds.  Noticeable but not actually problematic.  At any rate, I'd been wanting to do some tinkering lately and had an Amazon order planned because I wanted to do some electronics work on my warwalking rig, so I figured that, depending on the cost, I might add an SSD to my order.  Much to my surprise, a 120 gigabyte SSD is incredibly cheap; I paid a hair under $20us for a Kingston A400.  Eminently affordable.

Installing the SSD was surprisingly trivial.  Modulo a detour to count the number of free SATA ports on Leandra's mainboard (because I'd forgotten) and figuring out how long a SATA cable I'd need (for the record, a 1 meter SATA cable was more than enough to reach from port to SSD given the size of Leandra's chassis), mounting the SSD in Leandra's old floppy drive bay was a ten minute job, mostly because I took some extra time to make sure the cables stayed neat.

I'd done some research on the best way to go about adding a cache drive to an LVM, but ultimately I wound up using the instructions in the Arch Linux Wiki because my own research just didn't work.  Sometimes that happens.

A little bit of context, because it can be confusing (and there are one or two gotchas inherent in the procedure): First, the logical volume is built on top of the RAID-5 array, so forget about the fact that there's a RAID.  It doesn't matter, and you won't be messing with it.  Second, when you add a cache drive to a logical volume, you're not adding it to the volume group (the meta-pretend-big-ass virtual hard drive), you're adding the cache to a logical volume (a virtual, pretend partition of that meta-pretend-big-ass virtual hard drive).  So, I couldn't add it to the entire volume group; I had to add it to one logical volume, /home (or /dev/mapper/leandra-home).  All things being equal, the only thing I was really worried about speeding up was /home, because that's where my home directory with all my stuff is.  In theory I could have "partitioned" the SSD so that there would be some cache space for each logical volume, but I wound up dropping that idea because it wasn't really feasible in practice.
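If the layering is hard to picture, LVM's own reporting tools lay it out directly.  A minimal look at the stack (using Leandra's volume group name; substitute your own) goes something like this:

$ sudo pvs               # physical volumes: the RAID array, and later the SSD partition
$ sudo vgs leandra       # the volume group built on top of them
$ sudo lvs leandra       # the logical volumes (home, opt, srv, var) carved out of the group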

The SSD (henceforth referred to by the device file /dev/sdf) was a brand-new drive, so I had to partition it first.  I created one big partition because I was going to add it to the volume group as a new device (and LVM doesn't care about the sizes of the devices, or physical volumes, that comprise it).

{19:14:36 @ Sun May 19} [drwho @ leandra:() ~]$ sudo fdisk /dev/sdf
n
p
1
<hit enter a few times to accept the default values>
w
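A quick sanity check that the kernel actually sees the new partition looks something like this (either command will do):

$ lsblk /dev/sdf
$ sudo fdisk -l /dev/sdf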

Create a physical volume on the SSD:

{19:20:01 @ Sun May 19} [drwho @ leandra:() ~]$ sudo pvcreate /dev/sdf1

Add the physical volume to existing volume group (named leandra):

{19:20:01 @ Sun May 19} [drwho @ leandra:() ~]$ sudo vgextend leandra /dev/sdf1
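The volume group now spans both the RAID array and the SSD; if you check it, the VFree column should show roughly the SSD's capacity as unallocated space:

$ sudo vgs leandra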

Now, here's the bit that took some trial and error: Create the cache and add it to the /home logical volume all in one go.  It seems scary but it really does work:

{19:20:01 @ Sun May 19} [drwho @ leandra:() ~]$ sudo lvcreate --type cache --cachemode writethrough -L 100G -n cachepool leandra/home /dev/sdf1

Note that the name of the logical volume is specified as "volume group/logical volume", here leandra/home.  Also note that I told the lvcreate utility which disk device to use as the cache (/dev/sdf1).  The output of the lvcreate command looked like this:

Using 128.00 KiB chunk size instead of default 64.00 KiB, so cache pool has less than 1000000 chunks.
Logical volume leandra/home is now cached.
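For what it's worth, --cachemode writethrough means every write has to hit both the SSD and the RAID array before it's acknowledged, so losing the SSD can't cost you any data; writeback acknowledges as soon as the SSD has it, which is faster for writes but riskier.  I stuck with writethrough, but if you decide later that you want to switch, the stock lvchange command can flip it on a live volume (check the lvmcache man page for your LVM version first):

$ sudo lvchange --cachemode writeback leandra/home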

Last but certainly not least, let's ask the lvs utility what it sees.  As you can see in the output, the "cachepool" entry reflects what the solid-state cache was called when it was built (-n cachepool), the value of "Data%" is how much of the cache is in use, and the value of "Meta%" is how much of the cache's metadata (data that describes the cached data) is in use:

{19:32:02 @ Sun May 19} [drwho @ leandra:() ~]$ sudo lvs
  LV   VG      Attr       LSize  Pool        Origin       Data%  Meta%  Move Log Cpy%Sync Convert
  home leandra Cwi-aoC--- 11.05t [cachepool] [home_corig] 12.11  24.10           0.00
  opt  leandra -wi-ao----  1.00t
  srv  leandra -wi-ao----  1.00t
  var  leandra -wi-ao----  1.50t

If you look at the value of "Attr" in the above output, it's different from the other three logical volumes (-'d entries omitted because they're null); there's also a quicker way to inspect the cache directly, shown after the list:

  • C - Cached volume
  • w - Volume is mounted read/write
  • i - Data allocation policy is inherited from elsewhere (i.e., the default)
  • a - Active
  • o - Open
  • C - Target type: Cache
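As promised above, if you want to dig into the cache's internals directly, adding -a to lvs also lists the hidden volumes LVM created behind the scenes (the cache pool's data and metadata sub-volumes and the original, uncached copy of home); their names show up wrapped in square brackets, like [cachepool]:

$ sudo lvs -a leandra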

End result: My home directory on Leandra is now running faster than greased teflon.  Not bad for maybe $30us worth of parts off of Amazon.
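One last thing worth knowing before you need it: if the SSD ever starts to die, or you want it back for something else, the cache can be detached without touching the data underneath.  These are standard lvconvert operations rather than anything specific to my setup, so double-check the man page for your LVM version before running them:

$ sudo lvconvert --uncache leandra/home        # flush the cache and delete the cache pool
$ sudo lvconvert --splitcache leandra/home     # flush and detach, but keep the cache pool around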

