adj - subject to whims, crankiness, or ill temper

I use Gentoo Linux both at home and at work. Every so often I hit some snag and I'd like to detail the fix here both for my benefit and to possibly help anyone else having a similar issue.

Wednesday, September 17, 2008


This isn't really Gentoo-specific, but it's a nice trick that you could use to share an ssh session with someone. It works like VNC where both people have full control over the session at the same time.

You need to have GNU screen installed:
emerge -av app-misc/screen

Start a named screen session (sessionname is just a placeholder; pick anything):
screen -S sessionname

Have the other person, logged in as the same user, attach to it using:
screen -x sessionname
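Plain -x sharing only works when both people are logged in as the same user. To share across two different accounts, screen has a multiuser mode; here's a rough sketch (the session name pairfix and the user names are made up):

```shell
# Person A starts a named session ("pairfix" is a made-up name):
screen -S pairfix

# Inside the session, person A enables multiuser mode and grants access.
# These are screen colon-commands, typed after pressing Ctrl-a then ':'
#   multiuser on
#   acladd otheruser

# Person B (the other account) then attaches with user/session syntax:
screen -x usera/pairfix
```

Note that cross-account sharing typically requires the screen binary to be setuid root, which many distros disable by default, so the same-account method above is the path of least resistance.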

Friday, March 14, 2008

XFS fragmentation

Check your fragmentation levels:
# xfs_db -c frag -r /dev/vg/lv1
actual 37387, ideal 35541, fragmentation factor 4.94%
# xfs_db -c frag -r /dev/vg/lv2
actual 688725, ideal 667471, fragmentation factor 3.09%
# xfs_db -c frag -r /dev/md3
actual 631947, ideal 624800, fragmentation factor 1.13%
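As a sanity check, the "fragmentation factor" xfs_db reports is just (actual - ideal) / actual, expressed as a percentage. You can reproduce the first number above with a one-liner:

```shell
# (actual - ideal) / actual * 100, using the numbers from /dev/vg/lv1 above
awk -v a=37387 -v i=35541 \
    'BEGIN { printf "fragmentation factor %.2f%%\n", (a - i) / a * 100 }'
# prints: fragmentation factor 4.94%
```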

On Gentoo, xfs_db is in sys-fs/xfsprogs which, if you have an XFS filesystem, you should already have installed.

If you want to run the defragger, the command is xfs_fsr, and on Gentoo you need to install an additional package, sys-fs/xfsdump, to get it. You can read the xfs_fsr manpage for more info, but the gist is: if you don't supply any command-line params, it will start going through all of your XFS mountpoints and stop after either 10 passes or 7200 seconds. It keeps track of where it was, so you can just run it again and it will pick up where it left off if it didn't make it through all 10 passes.
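A few invocations I find handy (run as root; the paths below are just examples, not anything from my system):

```shell
# Defragment all mounted XFS filesystems with the defaults described
# above (10 passes / 7200 seconds), verbosely:
xfs_fsr -v

# Cap the run at one hour instead of the default two:
xfs_fsr -t 3600 -v

# Or target a single mountpoint, or even a single file:
xfs_fsr /home
xfs_fsr /home/some-big-file.iso
```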

Sunday, February 17, 2008

dmraid != kernel raid

I didn't mention it in the previous post about migrating to hardened gentoo, but initially when I went to re-add the drive back to the mirror I was getting some errors and it wouldn't let me. mdadm told me this:
mdadm: Cannot open /dev/hde1: Device or resource busy
I got similar output trying to add hde3 back to /dev/md3. Those partitions are only used for raid, so it really bothered me that they were reported as in use. I googled around a bit and found a reference to the device mapper (which starts at boot for me because I use LVM) creating some devices based on a motherboard raid controller. You can get a listing of what the device mapper has created using dmsetup ls. I ran the command and, sure enough, there were 4 devices listed starting with nvidia_.

I was able to remove 3 of the 4 using dmsetup remove, but one claimed to be busy and in use, and I still couldn't add the partitions to the raid. So I went back, edited grub.conf to remove dodmraid from the kernel line, and restarted the system. After it came back up I was able to hot-add the 2 partitions and get the mirrors back up and running in a non-degraded state. Also, dmsetup ls now only shows my LVM VGs. I went back and edited my genkernel.conf to tell it to stop adding dmraid support to my initrd in the future.
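For reference, the sequence looked roughly like this (nvidia_xxxx stands in for the fakeraid names my board's metadata produced; yours will differ):

```shell
# See what device-mapper has created (LVM LVs plus any fakeraid sets):
dmsetup ls

# Remove a stale fakeraid mapping so mdadm can open the underlying disk:
dmsetup remove nvidia_xxxx

# Once the mappings are gone (after booting without dodmraid, in my case),
# the partitions can be hot-added back to the mirrors:
mdadm /dev/md1 -a /dev/hde1
mdadm /dev/md3 -a /dev/hde3
```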

Saturday, February 16, 2008

Migrate an existing Gentoo system to hardened profile

This post is about migrating a system running a current amd64 profile to a hardened profile and all the things entailed in setting up a reasonably "hardened" Gentoo system. I've been wanting to use hardened but in the past when I have looked into it, the process of switching would have required a downgrade of libc that portage doesn't want to allow. Currently the hardened profile uses the same libc that I already have so this presents the opportunity to do the switch.

Covering my ass...

This is a potentially deadly operation (the general consensus in #gentoo-hardened was that some people have done it and it's probably OK, BUT Bad Things© could happen), so they don't really recommend doing it. Because of this, I did the following to help mitigate data loss.

I shut down most of my services (switching to single user mode would be better, but I was too lazy to hook up monitor/kb/mouse to server...) and ran a backup to get a snapshot of the system. My /boot and / partitions are mirrored using kernel raid and I told mdadm to kick the second drive out of each of the arrays:
# mdadm /dev/md1 -f /dev/hde1
# mdadm /dev/md3 -f /dev/hde3
# cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md1 : active raid1 hdg1[0]
40064 blocks [2/1] [U_]
bitmap: 2/5 pages [8KB], 4KB chunk

md0 : active raid5 sdd1[1] sdc1[2] sdb1[0] sda1[3]
937705728 blocks level 5, 128k chunk, algorithm 2 [4/4] [UUUU]
bitmap: 0/150 pages [0KB], 1024KB chunk

md3 : active raid1 hdg3[0]
155244032 blocks [2/1] [U_]
bitmap: 57/149 pages [228KB], 512KB chunk

unused devices: <none>
So now all of my changes for hardened will be occurring on one drive. If I completely screw up the system and can't fix it, I can just switch to the "faulty" drive and rebuild the array. If all goes well, I just re-add the partitions to the 2 raids, they'll resync, and all will be rosy.

Getting to the task at hand...

First, switch to the new profile. You can do this with eselect. Let's see what's available on the system:
# eselect profile list
Available profile symlink targets:
[1] default-linux/amd64/2006.1
[2] default-linux/amd64/2006.1/desktop
[3] default-linux/amd64/2006.0/no-symlinks
[4] default-linux/amd64/2006.1/no-multilib
[5] default-linux/amd64/2007.0 *
[6] default-linux/amd64/2007.0/desktop
[7] default-linux/amd64/2007.0/no-multilib
[8] default-linux/amd64/2007.0/server
[9] hardened/amd64
[10] hardened/amd64/multilib
[11] selinux/2007.0/amd64
[12] selinux/2007.0/amd64/hardened
Change the profile:
# eselect profile set 10
Next, you need to build the hardened toolchain:
# emerge -av --oneshot binutils gcc virtual/libc
Tell the system to use the new (older) hardened gcc profile:
# gcc-config -l
[1] x86_64-pc-linux-gnu-3.4.6
[2] x86_64-pc-linux-gnu-3.4.6-hardenednopie
[3] x86_64-pc-linux-gnu-3.4.6-hardenednopiessp
[4] x86_64-pc-linux-gnu-3.4.6-hardenednossp
[5] x86_64-pc-linux-gnu-3.4.6-vanilla
[6] x86_64-pc-linux-gnu-4.1.2 *
# gcc-config x86_64-pc-linux-gnu-3.4.6
* Switching native-compiler to x86_64-pc-linux-gnu-3.4.6 ...
>>> Regenerating /etc/ [ ok ]

* If you intend to use the gcc from the new profile in an already
* running shell, please remember to do:

* # source /etc/profile

# source /etc/profile
Next, a slight change to the /etc/make.conf CFLAGS: adding -fforce-addr. (I don't know what it does, but if you download a hardened stage tarball it's set in the make.conf by default, so I'm adding it here.) Substitute your -march value for mine, of course:
CFLAGS="-march=k8 -pipe -O2 -fforce-addr"
Next, I do a test emerge command and look for green (USE flags that are changing state). The reason you need to do this is that each profile has its own set of profile-defined USE defaults; the new hardened profile added a couple and removed a few in my case. So basically, do an emerge -ave world and look for green and *, which signifies a change in a USE flag since the last time you merged a package. Add or remove the corresponding USE flags in /etc/make.conf (or use app-portage/ufed as I do). Keep running the emerge -ave world and answering n until you are happy with the output, then hit y to actually start merging.
# emerge -ave world
If you run into any snags (a package fails to build), just note the package that failed and restart the emerge with "emerge -ave world --resume --skipfirst". Obviously things can get a little tricky if the problem package is a core system library or something, but if you don't use --resume, emerge will start rebuilding the WHOLE system again. In the past I've found it relatively safe to "fix" the problem in another shell while the build continues in the primary shell.

So about 9 hours and 312 packages later it's done. I restarted most of my network services just to make sure they wouldn't blow up right off the bat and everything seemed alright so far. I emerged hardened-sources while the world was rebuilding so I kicked off genkernel to configure (according to the various hardened guides), build and install the new kernel with hardened sources. After that I rebooted and everything still came up OK.

After testing things out a bit, I re-added the second drive to the mirrors and let the arrays resync:
# mdadm /dev/md1 -a /dev/hde1
# mdadm /dev/md3 -a /dev/hde3
So those are the basic steps to switch over to hardened. Remember, always have backups ready before you do something like this.

Tuesday, January 29, 2008

keypad is wonky in nano

For a while now, on multiple Gentoo boxes, I've had an issue where the keypad keys function as if NumLock were off even when it was on. If I tried to type a 0, it would trigger the "insert from file" function. A little searching on Google revealed the following in the nano manpage:
 -K (--rebindkeypad)
        Interpret the numeric keypad keys so that they all work properly. You
        should only need to use this option if they don't, as mouse support
        won't work properly with this option enabled.
To fix this system-wide for everyone, just add the following to /etc/nanorc (or, on Gentoo, uncomment the line, as it's probably already there):

## Fix numeric keypad key confusion problem.
set rebindkeypad
Alternatively, add the line to your ~/.nanorc.

Friday, January 25, 2008

I can't find my UUID!

In the last post I showed how to reference a partition in /etc/fstab using a UUID and ran through a couple real-world scenarios for wanting to do so.

This post is about the trouble I ran into looking for said UUIDs...

I have 6 hard drives in my server, 2 have 3 partitions each (boot, swap, root) and are raid mirrored, the other 4 have 1 each and are set up as raid 5. I'm only concerned with the first 2 for this post. After seeing the error about mounting the swap on the last bootup I figured it would be an easy fix. I knew how to list the UUIDs so I typed in that command and was greeted with a seriously lacking list of partitions:
# ls -l /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 10 2008-01-25 19:44 c2ceffb9-90be-4564-a946-9d37de7725ba -> ../../hdg2
lrwxrwxrwx 1 root root 22 2008-01-25 19:44 ca583626-4a25-4af7-b6c5-8e59a502dbc2 -> ../../mapper/vg-ballzy
lrwxrwxrwx 1 root root 22 2008-01-25 19:44 f5cc881f-210a-431f-8d52-f1e5b512b57b -> ../../mapper/vg-backup
As you can see, none of the partitions from hde are listed, and only the one from hdg is listed. I'm not sure how or why the other ones are not listed.

Since the swap partitions weren't mounted I first tried mkswap to just "reformat" the swap:
# mkswap /dev/hde2
Setting up swapspace version 1, size = 1028153 kB
no label, UUID=56c2f2af-86dd-4390-ae1a-c7fb71e6ed05
OK, looks good so far. Let's try turning it on:
# swapon UUID=56c2f2af-86dd-4390-ae1a-c7fb71e6ed05
swapon: cannot find the device for UUID=56c2f2af-86dd-4390-ae1a-c7fb71e6ed05
But... mkswap just told me the UUID, how can it not be found?!?!?

After a little digging, I came up with the vol_id command and it clued me in to the problem:
# vol_id /dev/hde2
Raid member? OK, I admit, I used to mirror my 2 swap partitions, but after seeing the performance I decided against the protection it afforded and just went back to adding 2 separate swap partitions. It seems the raid superblock was still in the partition and mkswap wasn't overwriting it for whatever reason.

After another quick google search for deleting a raid superblock, I found the proper command and here are the results:
# mdadm --zero-superblock /dev/hde2
# vol_id /dev/hde2
Ahhh, the real UUID, and it sees it as swap as well. I then proceeded to update the /etc/fstab after which swapon -a correctly enabled both swaps. As to why the UUIDs are not listed under /dev, I don't know. Maybe after a reboot the other swap will show up? The other /dev/by-* listings show all the partitions properly.

EDIT: Since I'm running Gentoo, a simple udevstart causes udev to restart. Now,
ls -l /dev/disk/by-uuid/
shows both hde2 and hdg2 :).

The mystical UUID

Every filesystem (partition?) should have a UUID. On modern Linux systems you can see them with the following command:
# ls -l /dev/disk/by-uuid/
total 0
lrwxrwxrwx 1 root root 10 2008-01-25 19:44 c2ceffb9-90be-4564-a946-9d37de7725ba -> ../../hdg2
UUIDs, while a bit cumbersome to look at, are extremely nice because you can use them in a lot of places instead of a normal device name (such as /dev/hdg2 in the above example).

Tonight I moved the 2 hard drives I had plugged into the onboard IDE controller of my motherboard onto a Promise Ultra100 card. Because of this, the kernel renamed the devices from /dev/hda and /dev/hdc to /dev/hde and /dev/hdg. Upon booting the system I saw the following:
swapon: cannot canonicalize /dev/hda2: No such file or directory
swapon: cannot stat /dev/hda2: No such file or directory
swapon: cannot canonicalize /dev/hdc2: No such file or directory
swapon: cannot stat /dev/hdc2: No such file or directory
UUIDs will help this to never happen again.

Here are the relevant lines from my old /etc/fstab:
/dev/hda2   none    swap    sw,pri=1    0 0
/dev/hdc2 none swap sw,pri=1 0 0
And the new lines:
UUID=fe6bffd9-5b6b-4db9-8929-cf1575a72d67   none    swap    sw,pri=1    0 0
UUID=e2992cf5-bc3a-4b3a-a920-d9dfbe7a5a9a none swap sw,pri=1 0 0
As I said, it doesn't look as pretty, but look what happens with the old /etc/fstab:
# swapon -a
# cat /proc/swaps
Filename Type Size Used Priority
and the new:
# swapon -a
erma ~ # cat /proc/swaps
Filename Type Size Used Priority
/dev/hde2 partition 1004052 0 1
/dev/hdg2 partition 1004052 0 1
If you haven't figured it out by now, by specifying partitions by UUID, you remove the dependency on where they are physically plugged into the motherboard and any kernel naming conventions. I recently had my SATA drives move around a bit after a BIOS update, so UUIDs would help out there as well.
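If you're converting more than a couple of entries, a trivial sketch like this spits out the line to paste into /etc/fstab (the UUID here is the one from my fstab above; substitute your own):

```shell
# Print an fstab swap entry for a given UUID
uuid=fe6bffd9-5b6b-4db9-8929-cf1575a72d67
printf 'UUID=%s\tnone\tswap\tsw,pri=1\t0 0\n' "$uuid"
```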

As it happens I had some trouble finding the (correct) UUID of one of my swap partitions but that's the topic of my next post.