Open Source Horror Story – A Linux Recovery Tale

Hi children! I know it is a bit early for scary tales. We usually get to those in October. But I have one for you that you just might want to hear now. So. Get your hot cocoa, your S’mores and your sleeping bag and come over here by the fire. I have a tale of chills and thrills to tell you young’uns. There now. Are you all snuggled in and ready for a scary tale? Good. Here goes …

It was late on an August evening. August 30th, to be exact. A brave independent consultant and Linux administrator was finishing up a long, slow upgrade from Mandriva 2010.2 to Mandriva 2011 for a client. He had noticed the upgrade was taking an excessively long time, but as this was only his second upgrade to Mandriva 2011, he chalked the slowness up to the new release. Little did he suspect the slow upgrade was due to … due to … oh, I can hardly say it to you sweet, innocent young’uns. But to tell the tale properly I must say it … A FAILING HARD DRIVE! (Look! I have goose bumps!)

When he rebooted following the last stage of the upgrade, he saw a … a … a … KERNEL PANIC! The system could not find the root/boot partition. So, he booted a PartedMagic Live CD to access the drive and see what was wrong. But PartedMagic could not mount the partitions either. When he checked with GParted he saw that the /home partition, which he knew to be an XFS file system, was being “reported” as a “damaged” EXT4 file system. This looked bad. Very bad. So, he ran GSmartControl and tested the drive. Oh no! The drive was giving errors by the megabyte! Oh the horror! The angst! The tearing out of the hair … Okay, so he’s 50ish and mostly bald on top with a ponytail. He really avoids pulling out what hair he has left. But you get the picture.

Okay, not to worry. He had sold the client a new, spare hard drive just the right size to replace the failing drive. He also “knew” the client had backups, because he had set up the backups for them and told them how to run them. They also had periodic automatic backups and had been told how to check that the backups were running and completing successfully. But when he checked for the most recent backup … it was from May! No one had been running the manual backups, and the automated backups were returning error logs that NO ONE WAS READING! (Yeah, he should have run an “extra” backup himself, but time was pressing because he had a time limit from the client to get the upgrade done. The time limit left no time for a backup.)

Now things were starting to look grim. He knew that losing three months of financial data stored in QuickBooks in the XP Professional virtual machine on the /home partition of the client’s drive could be a disaster for this small business client. Thinking it over, he decided the only solution was to run xfs_repair on the /home partition. So he did. Lo and behold, it worked! Well, somewhat. There were hundreds of megabytes in lost+found but the user directories showed up and most of the files were there, including what appeared to be the XP Professional virtual machine directory named .VirtualBox in the user account that ran the VM. Unless you have been in this position, my children, you have no idea the sense of relief this brave Linux denizen felt. But it was a premature relief, as you shall see.
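
For the technically curious, the repair step would have looked something like this hedged reconstruction; the device name is an assumption, since the tale does not record which partition held /home:

# Hedged sketch: repair an XFS /home partition (device name illustrative).
# The partition must be unmounted before xfs_repair will touch it.
umount /dev/sda6
xfs_repair /dev/sda6
# If the log is too damaged to replay, xfs_repair may insist on -L to zero it,
# at the cost of losing whatever was still in the log:
# xfs_repair -L /dev/sda6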

He immediately shut down the system and installed the spare hard drive. Then our brave lad rebooted with the PartedMagic Live CD and ran GParted again to create a new partition layout. Then he ran Clonezilla to clone the recovered /home partition to the new drive, keeping his fingers, toes, arms, legs and eyes crossed for luck. (Did I mention he is a contortionist? No? Well, he’s not. That sentence is just for “color”.) The clone completed successfully and our intrepid Linux fellow shut down the system, removed the naughty hard drive, and gave it proper rites before smashing it with a sledge hammer. (Yeah, you guessed it, more “color”.)

Then he reran the “upgrade”, which had now morphed into a fresh install of Mandriva 2011 on the new hard drive. It was 4:00 AM on August 31st at this point. He was now into his 14th hour of an “upgrade” that was supposed to take less than six hours by prearranged agreement with the client. By 7:30 AM, when the client’s staff began arriving, he had the system “finished”. The printer was printing. The scanner was scanning. The VM was booting. The rooster was crowing … just checking to see if you are paying attention. All appeared well and the client was understanding about hardware failures happening. After going over backup procedures with the client, again, our weary Linux consultant headed home for a short nap before starting his new business day.

Later that day he received a call. Yes, children, it was the client. The QuickBooks data was showing nothing past April 2010. Since this was August 2011, that was a Very Bad Thing. So, our fine Linux fellow headed back to the client and the “problem” system, as he was now calling it. Upon review he discovered the restored virtual disk was a backup made in April of 2010, prior to a VirtualBox upgrade at that time. Where was the most recent virtual disk with the client’s data? Gone. Vanished. Eaten by an evil hard drive. But, a light appeared above our hero’s head! Thanks to some sleep and some caffeine, he remembered that QuickBooks had been reinstalled with a new release in late June of 2011. He Had A Backup Of The System On A USB Drive From That Day! Yes, it would still mean losing two months of data. But that was much more acceptable in the client’s view than losing a year and a half of financial data, which would mean near-certain doom for almost any small business.

So, our Linux protagonist retrieved the USB hard drive, attached it to the system and ran a restore to get the virtual machine back from June 2011. This worked successfully and the VM booted. A check of the VM showed the data from June was there and intact. Our nice Linux guy packed up his gear and went over backup procedures with the client, again. (See a trend here?) Then he headed home for supper and a good night’s rest. The End …

Well, not yet. You see, losing data really irritates our Linux Paladin. His mind would not let go of the problem. He kept thinking there was something he missed. Something he could have done to get all the data back. Something … something … some* … Ah HA! He recalled that lost+found directory with the hundreds of megabytes in it! He quickly called the client and arranged to go on-site after hours on that 1st day of September 2011. He combed through the lost+found directory with the ‘find’ command, searching for files around the correct size of our missing, most recent, virtual machine file. There was one hit, just one. But it was enough. He had found the latest copy of the virtual machine. After making a backup(!) he copied this file to the correct directory, set the virtual machine back up using this found file, and all the financial data was recovered. Everyone rejoiced and there was much feasting. (Yep, “color”.) The Real End.
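
(A postscript for the technically curious: the search looked something like this hedged reconstruction. The path and size range are illustrative, not the real numbers.)

# Hedged sketch: hunt lost+found for orphaned files roughly the size of the
# missing virtual disk, here anything between 9 GB and 12 GB.
find /home/lost+found -type f -size +9G -size -12G -exec ls -lh {} \;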

What is the moral of our story, young Tuxes? It is this: Never rely on someone else to do a backup. Back up, back up, back up, back up, back up for yourself. Then when you think you have enough backups, do another backup. You can be sure our Linux star has learned that lesson … again.

Linux: Bacula is for Everyone* (backup software)

* Well, almost everyone. If one just wants to back up a few files on random occasions then Bacula is not the software to use. But if one wants to run regular, scheduled backups to just about any type of storage media then Bacula will most definitely work.

I must admit, I have been a tar + cron Unix guy for over 20 years and never really considered anything else necessary for backups on Unix, until now. I recently decided to learn Bacula so I could implement it for one of our clients that needs a new backup solution for their shiny, new PC systems and Linux server. The server is running Mandriva 2010.2 Linux with SAMBA and can easily handle adding Bacula to the mix. The PC systems are running Windows 7 Professional 64-bit, for which Bacula has a solution. During this process I have decided I can now add Bacula to my short list of "must have" Unix software for small, medium and large businesses.

In all honesty, I am still a Bacula novice. However, I am not a backup software novice and can already see, based on my slightly over two weeks of working with Bacula, that this is some excellent, well designed and well documented software. Bacula is also complex software and takes a willingness to study and learn before one can get one's mind around how it all works. Here is a PDF of a simple diagram I created based on my experience with Bacula for those who like to see graphics: Bacula Components

It can be daunting to begin working with Bacula if one is completely new to business backup systems, especially enterprise grade business backup systems. But with some study of the Bacula documentation, experimentation with several non-critical test backups and the Webmin (Warning!) Bacula module, the work to get several PC systems backed up on a regular schedule can be much easier. In my experience, it is easier than running something like Retrospect Express, a typical small business backup solution, on each PC.

Here is how it works on Linux in a nutshell. One installs an SQL database back-end, such as MySQL or PostgreSQL. Then one installs the Bacula components from one's distribution or downloads and compiles the Bacula components oneself. (The former method is recommended unless one needs to compile Bacula from source for some reason.) Then, one runs these commands to set up the Bacula database (on our system these are in /usr/lib/bacula and are symbolic links to the actual scripts for the chosen database):

  • create_bacula_database
  • make_bacula_tables
  • grant_bacula_privileges

One's Linux distribution may or may not run these for one. By default the database is password-less. One may or may not wish to add a password to the Bacula database. If one does, then the password needs to be used in the Director configuration file.
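
For those who want to see the whole sequence in one place, here is a hedged sketch assuming the MySQL back-end and the Mandriva paths mentioned above; adjust paths and commands for one's own distribution:

# Create the catalog database, its tables and the "bacula" database user.
cd /usr/lib/bacula
./create_bacula_database
./make_bacula_tables
./grant_bacula_privileges

# Optionally give the "bacula" database user a password (MySQL syntax shown);
# the same password then goes in the Catalog resource of bacula-dir.conf.
mysql -u root -p -e "SET PASSWORD FOR 'bacula'@'localhost' = PASSWORD('dbSecretStuff');"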

Then the configuration files need to be set up for one's system and LAN. The files one needs to edit are bacula-dir.conf, bacula-fd.conf, bacula-sd.conf, and bconsole.conf. (In our system these are in /etc/bacula). This can be a bit confusing at first, but experiment and keep reading the documentation. Eventually the way it works should "click" in one's mind. Since Bacula integrates all the components at the Director, once all the system configuration files are done one can then do all the work to create storage volumes, create backup jobs, and so on using the bconsole program at the command-line or the Webmin Bacula module in a web browser. We recommend Firefox.
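
To give an idea of what that looks like from the command-line, here is a hedged example of a short bconsole session; these are standard Bacula console commands, the trailing comments are annotations for the reader, and all output is omitted:

$ bconsole
*status dir       # show scheduled, running and terminated jobs
*label            # create and label a new storage volume in a pool
*run              # pick a defined backup job and start it now
*messages         # display any queued job messages
*quit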

Here are some example files from my working test setup here at the ERACC office.

File Daemon, bacula-fd.conf, on a PC to be backed up:

#
# List Directors who are permitted to contact this File daemon
#
Director {
  Name = router-dir
  Password = "BigSecretStuff"
}

#
# Restricted Director, used by tray-monitor to get the
#   status of the file daemon
#
Director {
  Name = era4-mon
  Password = "MySecretStuff"
  Monitor = yes
}

#
# "Global" File daemon configuration specifications
#
FileDaemon {                          # this is me
  Name = era4-fd
  FDport = 9102                  # where we listen for the director
  WorkingDirectory = /var/lib/bacula
  Pid Directory = /var/run
  Maximum Concurrent Jobs = 20
  FDAddress = 10.10.10.4
}

# Send all messages except skipped files back to Director
Messages {
  Name = Standard
  director = router-dir = all, !skipped, !restored
}

The passwords can be any text string one desires, including random characters, as long as they match when the daemons contact one another.
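
To make that password matching concrete, here is a hedged illustration of the matching Client resource in bacula-dir.conf on the Director. The retention values are arbitrary examples; the Password must match the first Director resource in the PC's bacula-fd.conf shown above:

Client {
  Name = era4-fd
  Address = 10.10.10.4
  FDPort = 9102
  Catalog = MyCatalog
  Password = "BigSecretStuff"         # must match bacula-fd.conf on 10.10.10.4
  File Retention = 30 days
  Job Retention = 6 months
  AutoPrune = yes
}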

Storage Daemon, bacula-sd.conf, on the system handling the storage media:

Storage {                             # definition of myself
  Name = router-sd
  SDport = 9103
  WorkingDirectory = /var/lib/bacula
  Pid Directory = "/var/run"
  Maximum Concurrent Jobs = 2
  SDAddress = 10.10.10.100
}

#
# List Directors who are permitted to contact Storage daemon
#
Director {
  Name = router-dir
  Password = "BigSecretStuff"
}

#
# Restricted Director, used by tray-monitor to get the
#   status of the storage daemon
#
Director {
  Name = router-mon
  Password = "OurSecretStuff"
  Monitor = yes
}

Device {
  Name = Data_r0
  Media Type = File
  Archive Device = /data_r0/bacula
  LabelMedia = yes;                   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;               # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}

#
# Send all messages to the Director,
# mount messages also are sent to the email address
#
Messages {
  Name = Standard
  director = router-dir = all
}

The Director configuration file, bacula-dir.conf, is rather large, so I will just post some of the parts that one needs to edit to get started.

The section of bacula-dir.conf that tells the Director about its own setup:

Director {                            # define myself
  Name = router-dir
  DIRport = 9101
  QueryFile = "/etc/bacula/scripts/query.sql"
  WorkingDirectory = /var/lib/bacula
  PidDirectory = "/var/run"
  Maximum Concurrent Jobs = 2
  Password = "BigSecretStuff"         # Console password
  Messages = Daemon
  DirAddress = 10.10.10.100
}

The Name should be unique.

This is the section of bacula-dir.conf where one tells the Director the database password, if one set a database password. Otherwise, leave this section alone.

# Generic catalog service
Catalog {
  Name = MyCatalog
  dbname = "bacula"; dbuser = "bacula"; dbpassword = "dbSecretStuff"
}

Here is the bconsole.conf configuration file:

#
# Bacula User Agent (or Console) Configuration File
#

Director {
  Name = router-dir
  DIRport = 9101
  address = 10.10.10.100
  Password = "BigSecretStuff"
}

As stated near the beginning of this article, Bacula is well documented. One should be ready to spend some time reading documentation and looking at the configuration files before starting on a Bacula implementation. Once one does "get it", using Bacula to back up one, dozens, or hundreds of PC systems should be straightforward.

Warning! We strongly recommend reading the documentation and learning how things work at the command-line before using Webmin! Webmin cannot substitute for lack of knowledge. (Go back.)

Linux: Using Remote Wakeup (Wake on LAN)

Here is the scenario: you are an independent IT consultant and/or an administrator of some business IT infrastructure. The systems you manage are a mix of Linux and Microsoft desktop and server systems. You do much of your system updates and other management tasks after hours using remote access over VNC or a VPN so you can be home with your family. The upper management, also known as “suits”, at your location has decided that shutting down PC desktop systems after hours is a great cost-saving measure and tells you to implement a plan to do this. The suits also want the desktop PC systems to be up and running when employees arrive in the mornings so time is not wasted while people wait for their PC to start up. How do you do all this and still give yourself that time at home at night when you need to do those after hours management tasks? Enter Remote Wakeup, otherwise known as Wake on LAN.

I was presented with a challenge like this for one of our charity organization clients that is trying to cut costs as much as possible. The original idea was that each PC would shut down each night after running its nightly backup. Then each morning the systems would be restarted by the users when they came in. Since most of the support provided by my company happens after hours, this meant we needed to implement Wake on LAN so we could still provide that after-hours support while giving the client the cost savings from shutting down the PC systems overnight. To do this we use the ‘wakeonlan’ Perl application with a mix of cron jobs and hand-crafted scripts to wake up PC systems as needed. Each PC that needs to be awakened has Wake on LAN enabled in its BIOS. One Linux PC system is set up as the “master” system for managing Wake on LAN and that PC is never shut down. That one PC’s BIOS is configured to auto-restart the PC after a power outage so it will always be available.
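
Before relying on the BIOS setting alone, it helps to confirm that the network interface itself has Wake on LAN enabled and to send a test packet. Here is a hedged sketch; the interface name and addresses are examples only:

# Check that the wired NIC supports and has enabled magic-packet wakeup.
# "Supports Wake-on: g" and "Wake-on: g" are what we want to see.
ethtool eth0 | grep -i wake-on

# Enable magic-packet wakeup if it is off (some drivers lose it after a reboot).
ethtool -s eth0 wol g

# Send a test magic packet from the "master" PC to one workstation.
wakeonlan -i 192.168.1.255 00:11:22:33:44:50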

Another hitch in this request is that some of the Windows desktop users do need access from home either after hours or over weekends. These people are completely unfamiliar with Linux and so need to be given an easy way to access the ‘wakeonlan’ capability of the Linux PC that handles this. This is accomplished by giving them PuTTY on Windows at home with a saved session that logs them into an account on the Linux PC over SSH. From there they just type ‘wol’ and are given a menu from which they can choose the PC they need to “wake up”. Here is a copy of the ‘wol’ script as it exists today:

#!/bin/bash
# wol script by ERA Computers & Consulting www.eracc.com
clear
wolmenu="wol.menu"
woldata="wol.data"
wolloc="`dirname \"$0\"`/"
if [ ! -f $wolloc$wolmenu ]
then
     echo Cannot find my menu file. My files should be in $wolloc.
elif [ ! -f $wolloc$woldata ]
then
     echo Cannot find my data file. My files should be in $wolloc.
else
     cat $wolloc$wolmenu
     echo;echo Type the number of the PC to awaken or c to cancel and press Enter:
     read n
     case $n in
          c) exit;
          ;;
          C) exit;
          ;;
          *) echo Waking up `grep ^$n $wolloc$wolmenu`.;
          ipsubnet=`grep ^$n $wolloc$woldata|cut -d ' ' -f 3`;
          hwaddress=`grep ^$n $wolloc$woldata|cut -d ' ' -f 2`;
          echo The command running is - wakeonlan -i $ipsubnet $hwaddress;
          wakeonlan -i $ipsubnet $hwaddress;
          ;;
     esac
fi

There is more that could be done to check for invalid input and to check to see when a PC starts responding to pings, but this serves our needs just fine as it is written. The wol.menu and wol.data files contain the information needed to present the users with a selection and then to take the selection and send a wakeup signal to the selected hardware address on the LAN. Here is the menu structure:

Number PC Name             HW Address         IP Address
====== =================== ================== ================
1      ACCOUNTING          00:11:22:33:44:50  192.168.1.10
2      FINANCE             00:11:22:33:44:51  192.168.1.11
3      MANAGER             00:11:22:33:44:52  192.168.1.12

Here is the data file that corresponds with the menu:

1 00:11:22:33:44:50 192.168.1.255
2 00:11:22:33:44:51 192.168.1.255
3 00:11:22:33:44:52 192.168.1.255

Yes, I know we could grab the data directly from the menu using some tool other than ‘cut’. However, what is done here works even though it is not as elegant as some would like. If some of you with elite bash scripting skills would like to share how to do this with just the menu, please do so in a comment.

The one other item we need to address is waking up all the PC systems before the employees arrive in the morning on Monday through Friday each week. This is done in a cron job on the same Linux PC. Here is how the job might be set up:

30 6 * * 1-5 wakeonlan -i 192.168.1.255 -f /home/user/scripts/autowol.data

What this does is tell the cron scheduler to run the command “wakeonlan” at 6:30 AM (“30 6”) every Monday through Friday (“1-5”). The command reads the hardware addresses from a file (“-f /home/user/scripts/autowol.data”) and then sends the wakeup signal to each address on the chosen subnet (“-i 192.168.1.255”). The hardware address data file looks like this:

00:11:22:33:44:50
00:11:22:33:44:51
00:11:22:33:44:52
00:11:22:33:44:53
00:11:22:33:44:54

It contains all the hardware addresses of the PC systems that need to be awakened at that time of day. One hardware address per line.

So, if you are an IT consultant, systems administrator or Joe User who wants to use Linux at home, perhaps this article gave you some idea of how to manage your own Remote Wakeup scenario.

Linux: Monitor a Service with a Watchdog Script

Old Unix hands already know this, but new Unix (Linux) users may be asking, ‘What is a “watchdog script”?’ Basically it is a bash or other script that is run via cron periodically to check on a persistent service. The watchdog script takes actions based on the state of the service it monitors.

There are other examples of watchdog scripts on the internet. Just search for them using your favorite search engine. Following is a watchdog script we created recently for a client to monitor an e-mail-to-pager system my company wrote for the client. Here is the script (with sensitive bits changed to protect the innocent):

#!/bin/bash
#
# watchdog
#
# Run as a cron job to keep an eye on what_to_monitor which should always
# be running. Restart what_to_monitor and send notification as needed.
#
# This needs to be run as root or a user that can start system services.
#
# Revisions: 0.1 (20100506), 0.2 (20100507)

NAME=what_to_monitor
START=/full/path/to/$NAME
NOTIFY=person1email
NOTIFYCC=person2email
GREP=/bin/grep
PS=/bin/ps
NOP=/bin/true
DATE=/bin/date
MAIL=/bin/mail
RM=/bin/rm

$PS -ef|$GREP -v grep|$GREP $NAME >/dev/null 2>&1
case "$?" in
   0)
   # It is running in this case so we do nothing.
   $NOP
   ;;
   1)
   echo "$NAME is NOT RUNNING. Starting $NAME and sending notices."
   $START >/dev/null 2>&1 &
   NOTICE=/tmp/watchdog.txt
   echo "$NAME was not running and was started on `$DATE`" > $NOTICE
   $MAIL -n -s "watchdog notice" -c $NOTIFYCC $NOTIFY < $NOTICE
   $RM -f $NOTICE
   ;;
esac

exit

In case you are a new Linux administrator and are virgin to all things Unix-ish we will explain what this script does.

First of all, if you want to run a script unattended from cron, the first line beginning with “#!”, called a “shebang”, tells the system what interpreter to use to process the script. In this case we want to use bash, so the line is “#!/bin/bash”. If this were a Perl script, the shebang line might look like “#!/usr/bin/perl”, depending on where the Perl executable resides on your system.

Following the shebang line are several lines of comments, which should be self-explanatory. Then the variables used in our script are assigned. These too should be self-explanatory. If not, please post a comment to ask about them.

The “NAME=what_to_monitor” line is quite important for our purposes. This is the actual name of the program or script as it would show up in a process list. We use it in the script to check whether that name shows up in the process list, in the line:

$PS -ef|$GREP -v grep|$GREP $NAME >/dev/null 2>&1

Yes, we could actually try to find a process ID number (PID) for the application we want to monitor. However, as long as the application has a unique name in the process list the method used here will work just fine. There is more we could do to see if the application is hung even though it shows up in the process list. The particular process we are monitoring here will not hang, but it may die for one reason or another. If it dies, it will immediately, or nearly immediately, disappear from the process list.
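
As an aside, on systems that have the pgrep tool one could replace the ps/grep pipeline with a single exact-match test. This is a hedged alternative, not what the script above uses:

# pgrep -x exact-matches the process name and exits 0 if found, 1 if not,
# so the "grep -v grep" dance is unnecessary.
if ! pgrep -x "$NAME" > /dev/null 2>&1
then
     echo "$NAME is NOT RUNNING."
fi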

The “$START >/dev/null 2>&1 &” line in our watchdog actually starts our process using the original process script itself from the service’s home directory. This could instead call the “/etc/rc.d/init.d/startupscript” for the script or program that is run as a service. The NAME variable, START variable and line to start the service would then look something like:

NAME=startupscript
START=/etc/rc.d/init.d/$NAME

$START start >/dev/null 2>&1 &

Presuming the startupscript uses the word “start” to start the service.

Once we have our script written we want to use it in cron. We use root’s cron for this but one could use any user that has the ability to (re)start system services. We save the watchdog script under /root/bin/watchdog, set it to be executable with “chmod 700 /root/bin/watchdog” and call it from cron using the following crontab line:

* * * * * /root/bin/watchdog

This causes the watchdog to run every minute so it checks the service as often as possible. One can modify the crontab line to run the watchdog whenever one needs it to run. But for persistent services that need to be running we always use a once per minute cron job for our watchdog scripts.
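
For example, a less critical service could be checked every five minutes instead; this is just a hedged variation on the same crontab line:

*/5 * * * * /root/bin/watchdog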

In this script we redirect the majority of our output to /dev/null because we do not want to inundate root’s, or the calling user’s, e-mail with cron job messages every minute. The default in cron is to mail the output from cron jobs to the calling user’s e-mail account. We do want to notify someone when a problem occurs causing our watchdog to trigger. So the NOTIFY and NOTIFYCC variables are set to the local or remote e-mail addresses of the people who need to be notified. Then these lines handle the notification message:

NOTICE=/tmp/watchdog.txt
echo "$NAME was not running and was started on `$DATE`" > $NOTICE
$MAIL -n -s "watchdog notice" -c $NOTIFYCC $NOTIFY < $NOTICE

Please feel free to post comments with pointers to other watchdog scripts or to “fine tune” what is shown here. Questions are also welcome.

GNU/Linux: Server Upgrade Problem Solving

Notice: This article is not specifically about GNU/Linux. It is under our GNU/Linux category because the server in the article was and is a GNU/Linux based server. Some portions of the article deal with solving upgrade problems for applications that run on the underlying GNU/Linux distribution. In summary, this is a hardware and software article.

Recently my company had the opportunity to upgrade a server that was running an old version of the Mandriva GNU/Linux distribution to Mandriva 2010. The system had been in place running along nicely for a few years and had not been upgraded to a new release in all that time except for some security patches. Then it started hanging mysteriously whenever it was under load from users opening Squirrelmail with large amounts of mail in the INBOX. Looking at logs and checking settings and system files revealed nothing. However, once the system was taken off-line, brought in-house to ERACC and the cover removed, we discovered there were several popped capacitors on the old motherboard. This was determined to be the source of the hangs:

[Photo: Gigabyte motherboard with blown capacitors]

This old Gigabyte motherboard was from near the beginning of the AMD dual-CPU era, when one could first put together a system with two AMD Athlon MP CPUs in it. It had a pair of these installed (AMD Athlon MP 2400+) and 512 MB of RAM. The Gigabyte board also had two PCI 64-bit slots, one of which was in use with an Adaptec 29160 SCSI controller that controls two SCSI drives. These were in a Linux MD RAID1 configuration except for the “/boot” partition. The small business owner of the server did not want to buy an entirely new server due to the current poor economy (Thanks to our current USA presidential administration and a complicit Congress. The bums.) and cash-flow being so tight. A new server could easily end up costing well over a thousand dollars. So my company was given the task of replacing this old motherboard with another from the same time frame and then doing an upgrade on the installed OS. Searching the web turned up some “recovered” (a.k.a. used) Tyan S2469GN dual-CPU boards. These were not new but they were the best we were able to find for this system.

Luckily this particular server only handles SMTP send/receive, some webmail and serves a few HTML pages for a small off-shoot business of the parent business. It would not be catastrophic for it to be down for a while. So, we could take the time to get things right while trying to keep things as inexpensive as possible. The client ordered one of the S2469GN boards and called us to come get it when it came in. Once we had the S2469GN here we discovered it was just slightly too large for the existing case. The S2469GN is a full sized Extended ATX, full CEB specification motherboard (12″ x 13″). We also discovered that the Gigabyte motherboard used a 20-pin power plug where the S2469GN uses a 24-pin power plug and requires an 8-pin power plug as well. So, we informed the client he would need a different case and an ATX EPS12V power supply. A search of old cases and power supplies at our offices and at the client site did not turn up anything we could use.

A search of new cases turned up the Antec P193 to handle the full sized EATX S2469GN motherboard. A search of power supplies came up with the BFG GX-550 ATX12V 2.2 550 Watt modular power supply. Both were found at online retail shops at a decent price that would not break the budget for this job. (ERACC does not sell individual components, only complete systems and some software licenses.) The client ordered these and once again called us to come get them when they arrived. For the record, the Antec P193 is a beautiful, roomy, well designed case.

Assembly of the system went smoothly due to the excellent design of the P193 case. The Adaptec and RAID1 configured drives were installed. The floppy drive (It is beige! Ack!) and CD drive (Also beige! Good Grief!) were installed. Then all power and data cables were connected, tied off and routed for air flow. The S2469GN has on-board ATI video. Powering up the system went well except for the floppy drive, which failed to be recognized. This was replaced with a new (still beige though!) floppy drive. Then the system passed POST and the old Mandriva distribution booted without a hitch. Now it was time to upgrade the system to the latest Mandriva 2010 release.

The Linux MD configured RAID1 was accessed using a Mandriva 2010 Live CD. The Mandriva 2010 KDE4 Live CD was found to be too “fat” for the old system with 512 MB of RAM, so we used the Mandriva 2010 Gnome Live CD, which was not quite so bad. We mounted one of our NFS server shares and backed up the critical data and configuration from the system by copying the relevant files and directories to a subdirectory on the NFS share. Then the system was rebooted to the installed OS. We selected the next version up from the installed Mandriva version from online repositories using http://easyurpmi.zarb.org/old/ because a big upgrade jump from the old version to the new 2010 release would probably fail. Then began the process of running updates with urpmi --auto-update -v to get the new version, then getting the new kernel, rebooting, and doing it all over again for the next release in line.
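
Each of those upgrade/reboot cycles followed roughly the same pattern. Here is a hedged sketch; the mirror URL and the kernel package name are placeholders and vary by release and architecture:

# Point urpmi at the next release's repositories (actual URLs come from easyurpmi).
urpmi.removemedia -a
urpmi.addmedia --distrib http://mirror.example/mandriva/next-release/i586

# Pull in the next release's packages, then make sure a new kernel is installed.
urpmi --auto-update -v
urpmi kernel-server-latest

reboot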

After going through several of these upgrade/reboot cycles, one of the reboots was done before getting the newer Linux kernel installed. This usually would not be a problem, but for some reason the system completely lost access to the RAID because of this. After talking it over it was determined the best way forward would be a fresh install. Sure, we could have managed to get the new kernel on there and gone on with the upgrade cycles. But a decision was made to recommend the fresh install to make sure old cruft was gone from the system. After all, we had a backup of the important data and configuration files. The client was contacted and gave the “go ahead”. We decided to get rid of the RAID and just use both disks as discrete drives. The primary disk would hold /boot, /root, /etc, /opt, and so on. The second drive would hold /home and /var/www.

After replacing the old CD drive with a used DVDRW drive (At least this one is silver and black.) a fresh install of Mandriva 2010 was done and the needed applications were installed, including Apache, Postfix, Squirrelmail, Courier (authdaemon, pop and imap) and so on. The backup of /home was copied back over. Settings for daemons were copied and edited as needed. Then, once all was in place, we began testing the system. While testing it was discovered that the old IMAP setup with Squirrelmail had created mbox-style mail boxes under /home/(username)/Mail/* while the new setup needs maildir-style mail directories and files under /home/(username)/Maildir/*. This was a conundrum, as we did not want the end-users to lose access to their archived mail in the /home/(username)/Mail/* mbox-style files.

After a bit of research three tools were used to solve this problem. One was built in-house and calls the other two to do the work:

  • maildirmake – a tool included with the Courier-IMAP package.
  • mbox2mdir – a mailbox to maildir converter by Sergey A. Galin.
  • convertmbox – a bash script built in-house to use the other tools and get the job done.

The maildirmake tool will create a maildir structure that can be used with modern IMAP maildir servers. Here is how a basic maildir will appear:

/home/user/Maildir/
/home/user/Maildir/cur/
/home/user/Maildir/new/
/home/user/Maildir/tmp/

When instructed to create new “folders” the Courier-IMAP server will create subdirectories off this structure like so:

/home/user/Maildir/.Saved/
/home/user/Maildir/.Saved/cur/
/home/user/Maildir/.Saved/new/
/home/user/Maildir/.Saved/tmp/

Here are the contents of the convertmbox script:

#!/bin/bash
for i in *; do
     case "$i" in
          Trash) echo "Skipping Trash file." ;;
          *) maildirmake $HOME/Maildir/."$i" && mbox2mdir ./"$i" $HOME/Maildir/."$i"/cur ;;
     esac
done

The convertmbox script is gzipped at the URL above. Use gzip -d convertmbox.gz to extract it after downloading.

To use this, one logs in as root, uses “su - username” to switch to a user, changes to the directory containing the mbox-style mail files and types /path/to/convertmbox to convert the files. After verifying a successful conversion one may then use “rm -rf mbox_mail_directory” to get rid of the old mbox-style files and their containing directory.
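
Put together, the conversion for one user might look like this hedged sketch; the account name and paths are examples:

# As root, become the user and convert their old mbox files.
su - username
maildirmake ~/Maildir       # create the base maildir first, if it does not already exist
cd ~/Mail
/path/to/convertmbox
# Only after verifying the converted folders over IMAP/Squirrelmail:
cd ~ && rm -rf ~/Mail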

Testing the system with Squirrelmail following conversion of the user’s mail files showed that the conversion was successful. The old Mail directories were then removed. Then the system was ready to be delivered and placed back in operation following on-site testing to make sure nothing was amiss.
