About to Get Some New Hardware Around Here

I have acquired a rather large UPS. Used, of course, but it wasn't of much use to its previous owner. I have also come across a quad-Xeon workstation with 16GB of RAM, which I think would make a splendid server for the blog. I've had the Xeon for a while, actually, but what's kept me on the older box is that it's pretty miserly with electricity, which gives it a nice run time on UPS power. Well, now I have a big honkin' UPS, so that's less of a concern.

But the quad-Xeon is much faster, and has twice the RAM. The only issue is that I'm using dmraid (FakeRAID) in the current box to do mirroring, because I was short-sighted and didn't think through the consequences. Ordinarily I'd just move the mirrored pair over and things should be fine. But I have to break the dmraid pair and convert to Linux software RAID. This is doable, but a bit of a PITA, so I need to do it after hours. I think I can make the transition without incurring any serious downtime.

My plan so far is to remove one drive from the current mirror and let it run degraded. Set up a degraded software mirror on the new system with the drive I took out, then copy data from the current box over while it's live. Once the new box is running as a copy, albeit an outdated one, briefly shut down the current box and update the files that have changed. After that I should be able to bring up the new server, add the other drive, rebuild the mirrored pair, and we're good to go.
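The plan above can be sketched with mdadm and rsync. This is a rough sketch, not a tested procedure: the device names, mount point, and hostname (`/dev/sdb1`, `/mnt/newroot`, `oldbox`) are hypothetical placeholders, so adjust everything for your actual layout.

```shell
# On the new box: create a degraded RAID1 from the drive pulled out of
# the old mirror, using the literal word "missing" for the absent member.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 missing

# Fresh filesystem on the new array (this destroys whatever dmraid
# left on the drive -- the data comes back over the wire below).
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/newroot

# Initial copy from the still-live old box: -a preserves permissions
# and ownership, -H keeps hard links, -x stays on one filesystem.
rsync -aHx --numeric-ids oldbox:/ /mnt/newroot/

# Later, with services on the old box stopped, a final catch-up pass
# transfers only what changed and removes deleted files.
rsync -aHx --numeric-ids --delete oldbox:/ /mnt/newroot/
```

The `--numeric-ids` flag matters when the two boxes don't have identical user and group tables; without it, rsync maps by name and can silently reassign ownership.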

7 thoughts on “About to Get Some New Hardware Around Here”

  1. Linux might be really good at doing this, but using a single drive to create a degraded mirror sounds like a stretch.

    I really would be looking at hardware RAID, even if I had to buy a suitable controller from somewhere.

    1. The blog just runs a mirrored pair, and hardware RAID isn’t going to do a whole lot for you. I’d also lose the ability to remove the pair and plug it into another machine and have it work. I just took the drive out live, and we’re still running… so step one is good.

    2. I’ve done this before with Linux… it works fine. You can set up a degraded array, then once you have the data, add the mirror drive, make all the same partitions, then tell the Linux RAID driver to add the partition to the degraded pair. It’ll start rebuilding the pair. Once it’s finished, it’ll work as if it had been created normally all along.
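    The rebuild step described above might look like this; again, the device names are hypothetical and the exact commands depend on your partition scheme:

    ```shell
    # Clone the partition table from the active member to the newly
    # added disk (dump with sfdisk, replay onto the other drive).
    sfdisk -d /dev/sdb | sfdisk /dev/sdc

    # Tell the RAID driver to add the matching partition to the
    # degraded pair; the kernel starts resyncing immediately.
    mdadm /dev/md0 --add /dev/sdc1

    # Watch the rebuild progress.
    cat /proc/mdstat
    ```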

  2. Have you looked into DRBD?

    You could rsync the old to the new, take a quick downtime hit, and then use DRBD to keep the new in sync with the old until you move.
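    For reference, a minimal DRBD resource definition looks roughly like this. The hostnames, devices, and addresses are made up, and the exact syntax varies between DRBD versions:

    ```
    resource r0 {
        protocol C;                    # synchronous replication
        on oldbox {
            device    /dev/drbd0;
            disk      /dev/sda2;       # backing partition (hypothetical)
            address   192.168.1.10:7789;
            meta-disk internal;
        }
        on newbox {
            device    /dev/drbd0;
            disk      /dev/sdb2;
            address   192.168.1.11:7789;
            meta-disk internal;
        }
    }
    ```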

  3. Wow.

    I don’t know whether it’s good or bad that I’ve never messed with software RAID on Linux. It does sound like you’ve got a good understanding of what Linux can do with SW-RAID.

    Hope it finishes well.

  4. It will work, but then you carry the issue forward into the new server. You also end up carrying over the bad with the good.

    We do this en masse, and almost always rebuild from scratch. We learned over the years that if it is too hard to migrate our data to a new host, we need to better manage the applications and data. We get this a lot with our clients (we manage up to 40,000 TB of data at a time).

    My suggestion: build the new server and then move the applications and data when done. It’ll save you heartburn in the long run. Do it the way you want today and incorporate the lessons learned yesterday.

    Then do it again next year. ;)

    1. There’s no issue with the current build to carry forward. The only issue I want to do away with is the FakeRAID, which is largely already done. At some point I’ll probably do a fresh build and data migration, but not right now.

Comments are closed.