Howto: Boot Linux with a USB Flash Drive Raid as the Root FS
Recently, we did some speed tests on running USB flash drives in RAID0 (see 6x USB Flash Drive Raid). The results of the tests were promising enough to warrant booting from the raid and using it as a root filesystem. The problem is that booting from USB is a little hard, booting from raid is harder, but booting from USB raid is harder still. Googling around showed no results for any poor souls dumb enough to have tried, although we know you are out there.
We must work around our flash array’s three main flaws:
- Total disk space is limited (12GB in our raid).
- Write speeds are not as fast as a typical modern hard drive's. In particular, this makes fsync slow on most filesystems.
- The bios ordering and linux device letters of the flash drives change at random (when we physically move them around).

Our solution is to build a hybrid flash-raid/hard-drive system, with the hard drive on either IDE or SATA. Loading the kernel from the hard drive is more reliable because the bios drive ordering does not change for sata/ide hard disks the way it does for USB drives, so the hard drive will hold the /boot partition. Our flash raid is extremely fast for reads and random access, so we will use it for the root partition to minimize the time required to launch applications. For a typical user, most writes (and most fsyncs) occur in the /home partition, and /home is also where users store most of their files, which requires the most disk space. Altogether, it is only natural to put /home on the hard drive.
Prerequisites:
You will need:

- A cdrom drive and a linux rescue cd / live cd. Our favorite is SystemRescueCD.
- A working installation of your distro. Our weapon of choice is Gentoo, which is convenient because SystemRescueCD is also based on Gentoo. Any distro that lets you build your own kernel should work. The good thing about Gentoo is that it expects you to build your own kernel anyway, so we won't be going out of our way to select specific kernel options.
- Separate partitions for /, /boot, and /home:
  - / is on device /dev/OLDROOT
  - /boot is on device /dev/BOOT
  - /home is on device /dev/HOME
- Experience with linux, enough to know what you are doing if something goes wrong. This guide isn't for the faint of heart or the noob of linux.
Step 1: Prepare your kernel
To be able to boot the raid without an initramfs we need to build the raid capabilities into the kernel. See your distro’s appropriate guide for manually building a kernel. Include at least the following options.
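The exact option names vary between kernel versions; as a sketch, the important pieces (built in, not as modules, since there is no initramfs to load them from) look something like this in `.config`:

```
# Illustrative .config fragment -- option names may differ on your kernel version
CONFIG_MD=y
CONFIG_BLK_DEV_MD=y        # md driver built in (on older kernels this implies autodetect)
CONFIG_MD_AUTODETECT=y     # auto-assemble 0.90-superblock arrays at boot (newer kernels)
CONFIG_MD_RAID0=y          # or whichever raid level you use
CONFIG_SCSI=y
CONFIG_BLK_DEV_SD=y        # usb-storage devices appear as sd* disks
CONFIG_USB_SUPPORT=y
CONFIG_USB_EHCI_HCD=y      # high-speed (USB 2.0) host controller
CONFIG_USB_UHCI_HCD=y
CONFIG_USB_OHCI_HCD=y
CONFIG_USB_STORAGE=y
CONFIG_EXT3_FS=y           # or your root filesystem of choice
```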
Build the kernel and modules and install them according to your distro.
Step 2: Boot the livecd.
For the SystemRescueCD, we like to use the boot options rescue64 docache dostartx. Rescue64 is only required if your installed distro is a 64bit system.
Step 3: Plug in the USB devices
This seems obvious, but in fact there are a few things to pay attention to. The most important thing for performance is to split the usb devices across all your USB host controllers. Most motherboards have 2 host controllers; it's also possible to purchase add-on cards, but their performance is unknown.
Figure out which ports map to which host controller; use lsusb to do so.
It should result in something that looks like this:
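Hypothetical lsusb output for six sticks split across two controllers; your bus numbers, device IDs, and drive models will differ:

```
Bus 002 Device 005: ID 0781:5406 SanDisk Corp. Cruzer Micro
Bus 002 Device 004: ID 0781:5406 SanDisk Corp. Cruzer Micro
Bus 002 Device 003: ID 0781:5406 SanDisk Corp. Cruzer Micro
Bus 001 Device 006: ID 0781:5406 SanDisk Corp. Cruzer Micro
Bus 001 Device 005: ID 0781:5406 SanDisk Corp. Cruzer Micro
Bus 001 Device 004: ID 0781:5406 SanDisk Corp. Cruzer Micro
```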
Even though the benchmarks showed that the order the raid devices are added to raid yielded no performance difference, you may wish to interleave the devices just in case your hardware is different. Also keep in mind that the controllers acting as a USB1 slow legacy bus often show up as different bus numbers. Under no circumstances should you force your usb devices onto a non-highspeed bus.
Step 4: Partition each flash device
In order for the kernel to assemble the raid at boot time, each flash device must be partitioned and each partition marked with the type "linux raid autodetect", which is partition type 0xFD.
Use your favorite partitioning program (gparted, cfdisk, sfdisk or fdisk) to partition each flash stick. For each device create a single primary partition with the autodetect tag.
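A sketch using sfdisk, assuming the six sticks are /dev/sdb through /dev/sdg (verify the device names with dmesg or lsusb before running this, because it wipes each stick's partition table):

```shell
# DESTRUCTIVE: overwrites the partition table on every listed device
for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg; do
    # one primary partition spanning the whole stick, type fd (linux raid autodetect)
    echo ',,fd' | sfdisk "$dev"
done
```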
Step 5: Create the md device
We want to use superblock version 0.90 because it is currently the only superblock format that can be auto-assembled at boot time. If you do not choose a chunk size, mdadm currently defaults to 64k. Keep in mind that the benchmarks in our other article focused purely on large data transfers; larger chunk sizes may have a detrimental effect on access times. For this demo we are setting the raid level to RAID0; choose whatever level suits you. Be careful when you list the devices: mdadm will usually ask for verification if something is amiss, but not always. Here we are assuming your usb devices start at /dev/sdb1 and are in order sequentially.
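A sketch of the create command under those assumptions (6 sticks, RAID0, 0.90 superblock, 64k chunk):

```shell
mdadm --create /dev/md0 \
    --metadata=0.90 \
    --level=raid0 \
    --chunk=64 \
    --raid-devices=6 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1

# confirm the array came up before going any further
cat /proc/mdstat
```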
Step 6: Create the filesystem
The file system affects the performance of the drive quite a bit; in our benchmarks we saw that reiser4 performs exceptionally well. The problem with reiser4 is that an fsync on a single file causes the entire pipeline to flush before returning, which makes some programs (pidgin, emacs) perform unacceptably on a flash raid. Reiser4 also requires a special kernel patch, so it is not recommended.
The default filesystem for linux is currently ext3, so we will use that for this demonstration.
In our flash raid we have 6 devices. For ext3, the 'stride' is chunkSize/blockSize, and the stripe width is numDevices * stride. Nowadays, everyone uses a blocksize of 4k because it is the best performing. For us that means a stride of 64k/4k = 16 and a stripe width of 6 * 16 = 96.
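The arithmetic, spelled out for our setup (6 devices, 64k chunk, 4k block):

```shell
ndev=6          # devices in the raid
chunk_kb=64     # mdadm chunk size
block_kb=4      # ext3 block size
stride=$((chunk_kb / block_kb))    # chunk / block = 16
stripe_width=$((ndev * stride))    # devices * stride = 96
echo "stride=$stride stripe-width=$stripe_width"
```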
For the very best performance, we will mount the filesystem with the data=writeback journaling mode. This allows the file system to write data out of order. So, let's set that mode as the filesystem's default.
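A sketch of the two commands, assuming the array is /dev/md0 (the stride value is chunk/block as computed in this section; recent e2fsprogs also accept a stripe-width extended option):

```shell
# DESTRUCTIVE: formats the array
mkfs.ext3 -b 4096 -E stride=16 /dev/md0
# newer e2fsprogs can also be told the full stripe:
#   mkfs.ext3 -b 4096 -E stride=16,stripe-width=96 /dev/md0

# store data=writeback as the default mount mode in the superblock
tune2fs -o journal_data_writeback /dev/md0
```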
Step 7: Copy the root directory from your hard drive to the flash raid.
Now we copy all the files over to the flash devices.
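A sketch from the livecd, assuming the old root is /dev/OLDROOT and the array is /dev/md0; the mount points are illustrative:

```shell
mkdir -p /root/oldroot /root/flashroot
mount /dev/OLDROOT /root/oldroot
mount -o data=writeback /dev/md0 /root/flashroot

# -a preserves permissions, ownership, symlinks, and devices;
# -x stays on one filesystem so separate /home and /boot are not dragged along
rsync -avx /root/oldroot/ /root/flashroot/
```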
Step 8: Update your fstab
Not only do we need to update the root device, we also want to add the data=writeback flag as well as the noatime flag for maximum performance. Open /etc/fstab (you probably want to use nano, but the livecd also has qemacs and vi). Leave your /home and /boot lines alone; edit only the root fs line.
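Assuming the array is /dev/md0, the change looks something like this (your old device name and existing options will differ):

```
# before
/dev/OLDROOT   /   ext3   noatime                  0 1
# after
/dev/md0       /   ext3   noatime,data=writeback   0 1
```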
Step 9: Update your bootloader
Assuming you use grub, follow these steps.
Edit your kernel options, in Gentoo that means change your kernel lines to something like this.
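A sketch of a grub (legacy) entry; the kernel image name and disk location are assumptions for illustration, and the paths are relative to the separate /boot partition:

```
title Gentoo Linux (flash raid root)
root (hd0,0)
kernel /kernel-2.6 root=/dev/md0 rootflags=data=writeback usb-storage.delay_use=1
```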
The option “usb-storage.delay_use=1” reduces the settle time for a usb device from 5 seconds to 1 second. This is a workaround to get the USB devices ready in time for the raid detection. The option “rootflags=data=writeback” forces the kernel to mount the drive in data writeback mode. Without this option booting will fail when it tries to remount the partition read-write.
In other popular distros, such as Ubuntu, instead of editing the kernel lines themselves, you put a line in the file that tells the update-grub script how to generate the kernel lines. Put this in /root/flashroot/boot/menu.lst
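The magic line is the commented kopt directive that update-grub reads; shown here with our options (the root device is an assumption):

```
# kopt=root=/dev/md0 ro rootflags=data=writeback usb-storage.delay_use=1
```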
Then you need to run update-grub. The procedure to do so would be something like:
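An untested sketch (see the note below): mount /boot into the copied system, chroot in, and regenerate the menu. Mount points match the copy step's assumptions; adjust to yours.

```shell
mount /dev/BOOT /root/flashroot/boot
mount -o bind /dev /root/flashroot/dev
mount -t proc none /root/flashroot/proc
chroot /root/flashroot /bin/bash
update-grub
exit
```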
Note that this has not actually been tested in Ubuntu so only try this if you know what you are doing.
Step 10: Clean up
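Cleaning up is just unmounting everything and stopping the array; a sketch assuming the mount points used here were /root/flashroot and /root/oldroot:

```shell
cd /
sync
# unmount inner mounts first if you created them (boot, proc, dev bind)
umount /root/flashroot/boot /root/flashroot/proc /root/flashroot/dev 2>/dev/null
umount /root/flashroot /root/oldroot
mdadm --stop /dev/md0
```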
Step 11: Pray that everything worked
Jesus, Buddha, FSM, whoever floats your boat.
Step 12: Restart
You’re done! Reboot.
If all goes well you should end up booting off your raid flash! If not then boot off that rescue cd and try to figure out where things went wrong.
Archived Comments
OS X running perfectly on a 3x8 GB Usb Stick Raid :)
Mon, 10/27/2008 - 13:50 — EDBBob
I’ve recently had the idea of raiding fast usb sticks. This weekend I tried it out and it runs very smooth. Next step will be to make a 64 GB raid built from 8x8 GB A-Data 200x sticks. They are very cheap where I live, and I’ve tested two of them in raid0 config with excellent results.
Btw. OS X installation is VERY easy on such a raid compared to Linux ;)
Have you had the raid running for a while? I’m a little concerned about the limited number of writes of the USB sticks, but they have a lifetime warranty, so they must be pretty solid.
Best regards
Morten
I have been a hardcore PC nerd for 25 years, until I recently bought a Mac after getting fed up with Vista and not feeling nerdy enough for Linux. I’ve had this strange smile ever since I did that, even though I love the “Infowants2befree” idea behind Linux…
OSX
Wed, 10/29/2008 - 08:09 — tim
I admit setting up the raid for linux was not easy. I would probably be using OSX myself if I wasn’t such a tightwad.
Those A-Data drives are priced better than the 2gb cruzers I bought half a year ago, and apparently perform better. I’m interested in what kind of speed you would get with 8 of them.
Copied from my post on the benchmark page:
I've been running the flash raid since I posted this article; that's been about 6 months as I write this. So far so good. The raid sees probably about 8 hours of use per day. It definitely sees more reads than writes, but it's under typical desktop use and nothing bad has happened yet.