Howto disable Consistent Network Device Naming

RHEL 7 (and CentOS 7) introduced the concept of Consistent Network Device Naming. In practice this means that an old network device name like eth0 changes into something like enp0s25. If it is not obvious to you how those funky new network device names are generated, read SystemD: Understanding Predictable Network Interface Names

Here are the steps to disable Consistent Network Device Naming on RHEL 7 or CentOS 7:

Step 1) add kernel boot args & regenerate the grub config

The following kernel boot arguments need to be added: net.ifnames=0 and biosdevname=0.

Open /etc/default/grub with your favorite editor and add those two options to the line starting with GRUB_CMDLINE_LINUX:
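After adding them, the line would look roughly like this (net.ifnames=0 and biosdevname=0 are the additions; the other arguments are just what a stock CentOS 7 install tends to have, so keep whatever your install already uses):

```shell
# /etc/default/grub -- net.ifnames=0 and biosdevname=0 are the additions,
# the rest of the arguments are only an example of a stock install:
GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet net.ifnames=0 biosdevname=0"
```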

Now let’s regenerate the grub config with the following command:

[root@test ~]# grub2-mkconfig -o /boot/grub2/grub.cfg

The output should list the kernel images that were found while generating the config.

Step 2) add a udev symlink, just to make sure

Basically adding the biosdevname=0 and net.ifnames=0 arguments to grub should be enough. But here’s another way just in case:

[root@test ~]# ln -s /dev/null /etc/udev/rules.d/80-net-name-slot.rules

After rebooting the host, the old familiar network ethX devices should be back.

Howto log extra Kickstart headers in nginx

When a server is kickstarted via PXE to deploy RHEL or CentOS it can send a few useful headers when requesting the Kickstart file. The following headers are sent if enabled in the PXE config:

X-Anaconda-Architecture: x86_64
X-Anaconda-System-Release: CentOS
X-RHN-Provisioning-MAC-0: eth0 11:22:33:44:55:66
X-System-Serial-Number: A1B2C3D4G5

Since these headers uniquely identify the requesting host, they can be used by a web app to modify the Kickstart file before it is sent back. You could for example define a specific network setup or partitioning layout.

In RHEL7 or CentOS7 you can enable these headers by adding inst.ks.sendmac and inst.ks.sendsn to the PXE config. Here’s an example:
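For example, a pxelinux label along these lines (the kernel/initrd paths and the Kickstart URL are placeholders, not an actual setup):

```text
label centos7-install
  kernel centos7/vmlinuz
  append initrd=centos7/initrd.img inst.ks=http://ks.example.com/ks/?CentOS-7_x86_64.ks inst.ks.sendmac inst.ks.sendsn
```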

By default nginx does not log these headers. Here’s how to make nginx log them.

1) add the headers to the log_format of your choice

In this example I just use the default ‘main’ log_format and have appended the headers to it. Open the file /etc/nginx/nginx.conf with your favorite text editor and add the lines beginning with $http_x_ (the last 4 lines):
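With the four headers above appended, the ‘main’ log_format would look roughly like this (the first three lines are the stock nginx format; the $http_x_ lines are the additions):

```nginx
log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                  '$status $body_bytes_sent "$http_referer" '
                  '"$http_user_agent" "$http_x_forwarded_for" '
                  '"$http_x_anaconda_architecture" '
                  '"$http_x_anaconda_system_release" '
                  '"$http_x_rhn_provisioning_mac_0" '
                  '"$http_x_system_serial_number"';
```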

You can of course add additional X-… headers. Just make sure that you prepend them with $http_, convert all characters to lowercase and convert any dashes ‘-’ to underscores ‘_’.

2) enable the log_format in your nginx server config

You now need to enable the log_format above (‘main’) in the ‘server’ section of your nginx config. Open the config file for your virtual server and modify the access_log line so that it uses the ‘main’ format:
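Something like this (the server name and log path are placeholders):

```nginx
server {
    listen       80;
    server_name  ks.example.com;

    # Log with the 'main' format defined in nginx.conf:
    access_log  /var/log/nginx/ks.access.log  main;
}
```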

Restart nginx, PXE boot a VM for deployment and see the headers show up in the nginx log:

Notice the ‘?’ in GET /ks/?CentOS-7_x86_64.ks. It means the request asks for the default index with the argument CentOS-7_x86_64.ks. Fully written out it looks like this:

GET /ks/index.php?CentOS-7_x86_64.ks

By just using the ‘?’ you stay more flexible: you can change the ‘index’ option in the nginx configuration without having to change the PXE boot config.

Howto fix CentOS 5 VM not booting with a kernel panic

While I moved on from CentOS 5 a long time ago, on rare occasions I need to access an old CentOS 5 VM. Today was such an occasion, and it resulted in the VM not booting due to a kernel panic. It took a bit of digging to figure out what was going on and how to fix it, so I thought I’d share it to save you some time.

The symptoms

When booting the CentOS 5 VM I saw the following messages:

Note the Trying to resume from label=SWAP-vda3 followed by Unable to access resume device (LABEL=SWAP-vda3). Apparently the last time this VM was used (running on a CentOS 5 host) it was paused or saved, which is why it now wants to resume from swap. And something seems wrong with the swap space in the VM: it might be corrupt, or running this CentOS 5 VM on a CentOS 7 host instead of its old CentOS 5 host requires some changes.

So the swap partition and the references to it in /etc/fstab need to be checked and fixed if necessary. At least that part is obvious.

What’s not so obvious is that when a CentOS 5 VM goes into such a state, the initrd gets recreated with instructions to resume from swap. So not only do you need to fix the swap space and check /etc/fstab and fix any references if required, you also need to recreate the initrd.

Here are the steps to fix both issues:

Boot VM with rescue mode

Boot the VM and press F12 during boot so you can select PXE. Once the PXE options are available, don’t boot from the local harddisk but boot a rescue image instead. I just used a CentOS 6 rescue entry which I always have available when booting with PXE.

The PXE label config looks like this:
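Roughly like this (the kernel/initrd paths and the URL to the rescue.ks file are placeholders, not my actual setup):

```text
label rescue-centos6
  menu label CentOS 6 rescue
  kernel centos6/vmlinuz
  append initrd=centos6/initrd.img ks=http://ks.example.com/ks/rescue.ks
```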

And the rescue.ks kickstart file looks like this:
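A minimal rescue kickstart could look like this (a sketch: the mirror URL is a placeholder, and check the kickstart documentation for your release to confirm the rescue command and options):

```text
# rescue.ks -- boot the installer straight into rescue mode (sketch)
rescue
lang en_US.UTF-8
keyboard us
network --bootproto=dhcp
url --url=http://mirror.example.com/centos/6/os/x86_64/
```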

Once you have booted into rescue mode your CentOS 5 VM harddisks should be automagically mounted and you are presented with the option to start a shell, run a diagnostic or reboot. Select Start shell and activate the chroot:
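In rescue mode the installed system is mounted under /mnt/sysimage, so activating the chroot looks like this (note the prompt changing from the CentOS 6 rescue bash to the CentOS 5 shell of the VM):

```shell
bash-4.1# chroot /mnt/sysimage
sh-3.2#
```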

Quick tip: you can now start the SSH server so you can ssh into the VM and do your work from a terminal instead of the VM console.
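Inside the chroot that should be as simple as:

```shell
sh-3.2# service sshd start
```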

Check and fix swap

Let’s see what partitions this VM has:
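A quick look with fdisk:

```shell
sh-3.2# fdisk -l
```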

And let’s see what the label of the swap partition is:
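One way to read the label is blkid, assuming the swap partition is /dev/sda3 (as the fstab entry suggests); on this VM it reported the stale SWAP-vda3 label:

```shell
sh-3.2# blkid /dev/sda3
```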

And if that fails, try:

So there are references to SWAP-vda3 (where vda refers to a virtio disk) in a VM that only ever had IDE-named devices. That does not seem right. Let’s recreate the swap partition with the proper label:
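With the swap partition on /dev/sda3, that is:

```shell
sh-3.2# mkswap -L SWAP-sda3 /dev/sda3
```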

Check and fix /etc/fstab

Let’s see what’s in /etc/fstab:

Note again the reference to the vda3 (virtio) partition while the label of our swap partition is now SWAP-sda3. So let’s fix the swap line in /etc/fstab so it has the proper label:
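It is a one-word change, so a sed one-liner does it. Here is a sketch against a copy of the file (everything except the swap line is made up for the example):

```shell
# Example fstab containing the stale swap label; everything except the
# swap line is made up for this sketch:
cat > /tmp/fstab <<'EOF'
LABEL=/          /     ext3  defaults  1 1
LABEL=SWAP-vda3  swap  swap  defaults  0 0
EOF

# Point the swap entry at the new label (in the chroot you would run
# this against /etc/fstab itself):
sed -i 's/LABEL=SWAP-vda3/LABEL=SWAP-sda3/' /tmp/fstab
grep swap /tmp/fstab
```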

Optionally install the latest updates

Before recreating the initrd you can optionally install the latest updates, if you have not updated the VM recently and feel adventurous. The reason I say adventurous is that if you run yum update while booted via a rescue image and accessing the VM via chroot, and a new kernel is installed, you will probably see the following error during the kernel installation:

The reason for the error is that when booting via a rescue image and accessing the VM via chroot, /dev/root does not point to the root partition (/dev/sda2 in this VM). Unfortunately the scripts that run when a new kernel is installed, new-kernel-pkg and grubby, can’t handle that situation gracefully, so grub.conf is most likely not updated with the new kernel details. If that is the case, you will need to manually add an entry for the new kernel to /boot/grub/grub.conf:
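For this VM (kernel 2.6.18-371.12.1.el5, root on /dev/sda2, /boot on the first partition) the entry would look something like this:

```text
title CentOS (2.6.18-371.12.1.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-371.12.1.el5 ro root=/dev/sda2
        initrd /initrd-2.6.18-371.12.1.el5.img
```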

Note that (hd0,0) might not apply to your setup and may require a different value.

Recreate the initrd

Go to the /boot directory and check what the version is of the most recent kernel:
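For example (any older kernels omitted here):

```shell
sh-3.2# cd /boot
sh-3.2# ls vmlinuz-*
vmlinuz-2.6.18-371.12.1.el5
```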

So the version is 2.6.18-371.12.1.el5

Next move the old initrd out of the way:
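For example:

```shell
sh-3.2# mv /boot/initrd-2.6.18-371.12.1.el5.img /boot/initrd-2.6.18-371.12.1.el5.img.bak
```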

Now let’s recreate the initrd:
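On CentOS 5 that is done with mkinitrd, passing the target image and the kernel version:

```shell
sh-3.2# mkinitrd -v /boot/initrd-2.6.18-371.12.1.el5.img 2.6.18-371.12.1.el5
```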

Also check that the latest kernel is the default one that gets booted. There should be an entry default=0 in /boot/grub/grub.conf. The 0 means that the first kernel entry (the one at the top, with kernel version 2.6.18-371.12.1.el5) will be used.

Before you reboot

Before you reboot, check the hardware details of the VM. In my case the VM required the Disk bus of the harddisk to be IDE. Check the setting in virt-manager or in the XML config file of the VM and make sure you have the correct settings.


Finally reboot the VM to see if it worked. From the console:

sh-3.2# sync
sh-3.2# exit
bash-4.1# exit

And then select reboot followed by Ok.