Linux: Bacula is for Everyone* (backup software)

* Well, almost everyone. If one just wants to back up a few files on random occasions then Bacula is not the software to use. But if one wants to run regular, scheduled backups to just about any type of storage media then Bacula will most definitely work.

I must admit, I have been a tar + cron Unix guy for over 20 years and never really considered anything else necessary for backups on Unix, until now. I recently decided to learn how to use Bacula so I could implement it for one of our clients that needs a new backup solution for their shiny, new PC systems and Linux server. The server is running Mandriva 2010.2 Linux with SAMBA and can easily handle adding Bacula to the mix. The PC systems are running Windows 7 Professional 64-bit, for which Bacula has a solution. During this process I have decided I can now add Bacula to my short list of "must have" Unix software for small, medium and large businesses.

In all honesty, I am still a Bacula novice. However, I am not a backup software novice and can already see, based on my slightly over two weeks of working with Bacula, that this is some excellent, well designed and well documented software. Bacula is also complex software and takes a willingness to study and learn before one can get one's mind around how it all works. Here is a PDF of a simple diagram I created based on my experience with Bacula for those who like to see graphics: Bacula Components

It can be daunting to begin working with Bacula if one is completely new to business backup systems, especially enterprise grade business backup systems. But with some study of the Bacula documentation, experimentation with several non-critical test backups, and the Webmin (Warning!) Bacula module, the work of getting several PC systems backed up on a regular schedule becomes much easier. In my experience, it is easier than running something like Retrospect Express, a typical small business backup solution, on each PC.

Here is how it works on Linux in a nutshell. One installs an SQL database back-end, such as MySQL or PostgreSQL. Then one installs the Bacula components from one's distribution or downloads and compiles the Bacula components oneself. (The former method is recommended unless one needs to compile Bacula from source for some reason.) Then, one runs these commands to set up the Bacula database (In our system these are in /usr/lib/bacula and are symbolically linked to the actual script to run for the database chosen.):

  • create_bacula_database
  • make_bacula_tables
  • grant_bacula_privileges

One's Linux distribution may or may not run these for one. By default the database is password-less. If one wishes to add a password to the Bacula database, that password must also be used in the Director configuration file.
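For example, with MySQL as the back-end, setting a password might look like the following (a sketch only; the bacula database user is the one the grant script sets up by default, and PostgreSQL would use ALTER ROLE instead):

mysql -u root -p -e "SET PASSWORD FOR 'bacula'@'localhost' = PASSWORD('dbSecretStuff'); FLUSH PRIVILEGES;"

The same password then goes in the Catalog resource of bacula-dir.conf, shown later in this article.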

Then the configuration files need to be set up for one's system and LAN. The files one needs to edit are bacula-dir.conf, bacula-fd.conf, bacula-sd.conf, and bconsole.conf. (In our system these are in /etc/bacula). This can be a bit confusing at first, but experiment and keep reading the documentation. Eventually the way it works should "click" in one's mind. Since Bacula integrates all the components at the Director, once all the system configuration files are done one can then do all the work to create storage volumes, create backup jobs, and so on using the bconsole program at the command-line or the Webmin Bacula module in a web browser. We recommend Firefox.

Here are some example files from my working test setup here at the ERACC office.

File Daemon, bacula-fd.conf, on a PC to be backed up:

#
# List Directors who are permitted to contact this File daemon
#
Director {
  Name = router-dir
  Password = "BigSecretStuff"
}

#
# Restricted Director, used by tray-monitor to get the
#   status of the file daemon
#
Director {
  Name = era4-mon
  Password = "MySecretStuff"
  Monitor = yes
}

#
# "Global" File daemon configuration specifications
#
FileDaemon {                          # this is me
  Name = era4-fd
  FDport = 9102                  # where we listen for the director
  WorkingDirectory = /var/lib/bacula
  Pid Directory = /var/run
  Maximum Concurrent Jobs = 20
  FDAddress = 10.10.10.4
}

# Send all messages except skipped files back to Director
Messages {
  Name = Standard
  director = router-dir = all, !skipped, !restored
}

The passwords can be any text string one desires, including random characters, as long as the strings match on both ends when one daemon contacts another.
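If one wants random characters, any generator will do. Here is one option among many:

openssl rand -base64 33

Paste the same string into the Director's configuration and the matching daemon's configuration so they agree.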

Storage Daemon, bacula-sd.conf, on the system handling the storage media:

Storage {                             # definition of myself
  Name = router-sd
  SDport = 9103
  WorkingDirectory = /var/lib/bacula
  Pid Directory = "/var/run"
  Maximum Concurrent Jobs = 2
  SDAddress = 10.10.10.100
}

#
# List Directors who are permitted to contact Storage daemon
#
Director {
  Name = router-dir
  Password = "BigSecretStuff"
}

#
# Restricted Director, used by tray-monitor to get the
#   status of the storage daemon
#
Director {
  Name = router-mon
  Password = "OurSecretStuff"
  Monitor = yes
}

Device {
  Name = Data_r0
  Media Type = File
  Archive Device = /data_r0/bacula
  LabelMedia = yes;                   # lets Bacula label unlabeled media
  Random Access = Yes;
  AutomaticMount = yes;               # when device opened, read it
  RemovableMedia = no;
  AlwaysOpen = no;
}

#
# Send all messages to the Director,
# mount messages also are sent to the email address
#
Messages {
  Name = Standard
  director = router-dir = all
}

The Director configuration file, bacula-dir.conf, is rather large, so I will just post some of the parts that one needs to edit to get started.

The section of bacula-dir.conf that tells the Director about its own setup:

Director {                            # define myself
  Name = router-dir
  DIRport = 9101
  QueryFile = "/etc/bacula/scripts/query.sql"
  WorkingDirectory = /var/lib/bacula
  PidDirectory = "/var/run"
  Maximum Concurrent Jobs = 2
  Password = "BigSecretStuff"         # Console password
  Messages = Daemon
  DirAddress = 10.10.10.100
}

The Name should be unique.

Here is the section of bacula-dir.conf where one tells the Director the database password, if one set a database password. Otherwise, leave this section alone.

# Generic catalog service
Catalog {
  Name = MyCatalog
  dbname = "bacula"; dbuser = "bacula"; dbpassword = "dbSecretStuff"
}

Here is the bconsole.conf configuration file:

#
# Bacula User Agent (or Console) Configuration File
#

Director {
  Name = router-dir
  DIRport = 9101
  address = 10.10.10.100
  Password = "BigSecretStuff"
}
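With the four files edited, one can do a quick sanity check from the console before building real jobs. A minimal first session might look something like this (the leading * is bconsole's prompt; output is omitted, and the job and volume names offered will come from one's own bacula-dir.conf):

bconsole
*status dir
*label
*run
*messages
*quit

If status dir reports the Director's version and scheduled jobs, the console and Director are talking.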

As stated near the beginning of this article, Bacula is well documented. One should be ready to spend some time reading documentation and looking at the configuration files before starting on a Bacula implementation. Once one does "get it", using Bacula to back up one, dozens, or hundreds of PC systems should be straightforward.

Warning! We strongly recommend reading the documentation and learning how things work at the command-line before using Webmin! Webmin cannot substitute for lack of knowledge.



Linux: Using Remote Wakeup (Wake on LAN)

Here is the scenario: you are an independent IT consultant and/or an administrator of some business IT infrastructure. The systems you manage are a mix of Linux and Microsoft desktop and server systems. You do much of your system updates and other management tasks after hours using remote access over VNC or a VPN so you can be home with your family. The upper management, also known as “suits”, at your location has decided that shutting down PC desktop systems after hours is a great cost savings measure and tells you to implement a plan to do this. The suits also want the desktop PC systems to be up and running when employees arrive in the mornings so time is not wasted while people wait for their PC to start up. How do you do all this and still give yourself that time at home at night when you need to do those after hours management tasks? Enter Remote Wakeup, otherwise known as Wake on LAN.

I was presented with a challenge like this for one of our charity organization clients that is trying to cut costs as much as possible. The original idea was that each PC would shut down each night after running its nightly backup. Then each morning the systems would be restarted by the users when they came in. Since most of the support provided by my company happens after hours, this meant we needed to implement Wake on LAN so we could still provide that after hours support while giving the client the cost savings from shutting down the PC systems overnight. To do this we use the ‘wakeonlan’ Perl application with a mix of cron jobs and hand-created scripts to wake up PC systems as needed. Each PC that needs to be awakened has Wake on LAN enabled in its BIOS. One Linux PC system is set up as the “master” system for managing Wake on LAN and that PC is never shut down. That PC’s BIOS is configured to auto-restart after a power outage so it will always be available.

Another hitch in this request is that some of the Windows desktop users do need access from home either after hours or over weekends. These people are completely unfamiliar with Linux and so need to be given an easy way to access the ‘wakeonlan’ capability of the Linux PC that handles this. This is accomplished by giving them PuTTY on Windows at home with a saved session that logs them into an account on the Linux PC over SSH. From there they just type ‘wol’ and are given a menu from which they can choose the PC they need to “wake up”. Here is a copy of the ‘wol’ script as it exists today:

#!/bin/bash
# wol script by ERA Computers & Consulting www.eracc.com
clear
wolmenu="wol.menu"
woldata="wol.data"
wolloc="$(dirname "$0")/"
if [ ! -f "$wolloc$wolmenu" ]
then
     echo "Cannot find my menu file. My files should be in $wolloc."
elif [ ! -f "$wolloc$woldata" ]
then
     echo "Cannot find my data file. My files should be in $wolloc."
else
     cat "$wolloc$wolmenu"
     echo;echo "Type the number of the PC to awaken or c to cancel and press Enter:"
     read n
     case $n in
          c|C) exit
          ;;
          # The trailing space in "^$n " keeps entry 1 from also matching 10, 11, etc.
          *) echo "Waking up $(grep "^$n " "$wolloc$wolmenu")."
             # Field 2 of wol.data is the hardware (MAC) address, field 3 the broadcast address.
             ipsubnet=$(grep "^$n " "$wolloc$woldata"|cut -d ' ' -f 3)
             hwaddress=$(grep "^$n " "$wolloc$woldata"|cut -d ' ' -f 2)
             echo "The command running is - wakeonlan -i $ipsubnet $hwaddress"
             wakeonlan -i "$ipsubnet" "$hwaddress"
          ;;
     esac
fi

There is more that could be done to check for invalid input and to check to see when a PC starts responding to pings, but this serves our needs just fine as it is written. The wol.menu and wol.data files contain the information needed to present the users with a selection and then to take the selection and send a wakeup signal to the selected hardware address on the LAN. Here is the menu structure:

Number PC Name             HW Address         IP Address
====== =================== ================== ================
1      ACCOUNTING          00:11:22:33:44:50  192.168.1.10
2      FINANCE             00:11:22:33:44:51  192.168.1.11
3      MANAGER             00:11:22:33:44:52  192.168.1.12

Here is the data file that corresponds with the menu:

1 00:11:22:33:44:50 192.168.1.255
2 00:11:22:33:44:51 192.168.1.255
3 00:11:22:33:44:52 192.168.1.255

Yes, I know we could grab the data directly from the menu using some tool other than ‘cut’. However, what is done here works, even if it is not as elegant as some would like. If some of you with elite bash scripting skills would like to share how to do this with just the menu, please do so in a comment.
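For instance, an awk one-liner ought to be able to pull both fields straight from wol.menu (an untested sketch; it assumes the column layout shown above, and one would still need the broadcast address, perhaps hard coded, since the menu lists each PC's own IP):

awk -v n=$n '$1 == n { print $3, $4 }' wol.menu

That would print the hardware address and IP for the chosen entry in one pass, replacing the two grep|cut pipelines.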

The one other item we need to address is waking up all the PC systems before the employees arrive in the morning on Monday through Friday each week. This is done in a cron job on the same Linux PC. Here is how the job might be set up:

30 6 * * 1-5 wakeonlan -i 192.168.1.255 -f /home/user/scripts/autowol.data

What this does is tell the cron scheduler to run the “wakeonlan” command at 6:30 AM (“30 6”) every Monday through Friday (“1-5”). The command reads the hardware addresses from a file (“-f /home/user/scripts/autowol.data”) and then sends the wakeup signal to each address on the chosen subnet (“-i 192.168.1.255”). The hardware address data file looks like this:

00:11:22:33:44:50
00:11:22:33:44:51
00:11:22:33:44:52
00:11:22:33:44:53
00:11:22:33:44:54

It contains all the hardware addresses of the PC systems that need to be awakened at that time of day. One hardware address per line.

So, if you are an IT consultant, systems administrator or Joe User who wants to use Linux at home, perhaps this article gave you some idea of how to manage your own Remote Wakeup scenario.



GNU/Linux Software I Use Regularly

I recently received an e-mail from a friend who has started using Ubuntu. He is rather new when it comes to running a GNU/Linux desktop and has asked me several questions. One of the questions was, basically, what software do I use and recommend. This is a serious question that a lot of new users will probably want answered.

Those of us who have been GNU/Linux desktop users for a long time take for granted the packages we install and use. As we have paid our dues to learn the ropes, one way we can help new users is to tell them what we use and make recommendations. It helps to have a base of software from which to start because there are so many choices under GNU/Linux a new user can easily become overwhelmed.

So, for my friend, and for all of you other new users out there, here is the software I use regularly.

My distribution of choice is Mandriva. Mandriva is an RPM-based distribution and has several very well written tools to help one manage one’s desktop system. Since RPM is a requirement of the Linux Standard Base (LSB), I prefer to stick with RPM-based distributions. Mandriva was one of the first, if not the first, of the RPM-based distributions to solve the “RPM dependency Hell” that so many encountered in the early days of RPM distributions.

My “desktop” runs a light window manager named fluxbox. I am not fond of Gnome or KDE as they are too bloated to start with. Sure, one can strip them down, but I would prefer to start light and add only what I want or need. Plus, some of my friends who run Gnome and KDE do occasionally have broken desktops from trying to update them to the latest and greatest. Due to the complexity of Desktop Environments (DE) like Gnome and KDE, they can be a bear to upgrade, especially for my friends who have jumped from an older major version to a newer one, like from KDE3 to KDE4. Just search the web and one can find story after story of upgrade PAIN going from KDE3 to KDE4. Due to those upgrade problems one of my friends now says she has a new swear word, “KDE4”. With fluxbox I have never had such a problem and do not expect to ever have a broken “desktop” because of a fluxbox upgrade.

I monitor my system temperatures and fans with lm_sensors and the sensors krell in Gkrellm. Gkrellm also lets me see at a glance how much space is left on certain partitions I want to monitor, as well as free RAM and other niceties like uptime and process usage.

I always have several xterm windows open to a bash command line. From these I can use dictd and the dict client to look up words and phrases from dictionaries I installed. Here is a little script I run from ‘root’ to install the dictionaries I want when I do a fresh install on new hardware:

#!/bin/bash
urpmi dictd-server dictd-utils dictd-client dictd-dicts-devils dictd-dicts-easton dictd-dicts-eng-fra dictd-dicts-foldoc dictd-dicts-fra-eng dictd-dicts-gazetteer dictd-dicts-gcide dictd-dicts-jargon dictd-dicts-vera dictd-dicts-web1913 dictd-dicts-wn dictd-dicts-world95
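Once the dictionaries are installed, a lookup from one of those xterm windows is a single command. For example, the -d option selects a single dictionary (wn is the WordNet database installed above); omit it to search them all:

dict -d wn serendipity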

The urpmi command is one of those nice tools written for Mandriva that I mentioned. There are several urpm* commands one may use to manage software from the command line. Mandriva also has a nice GUI called ‘rpmdrake’ that one may run instead of the command line versions. Both allow one to search for packages. However, the command line urpm* tools have a more robust search, which can be combined with other command line tools to parse the output.
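For instance, a quick command line search might look like this (a sketch; urpmq’s -y option requests fuzzy matching):

urpmq -y dict

Because the output is plain text, it pipes cleanly into grep, sort and friends, which is where that extra robustness comes from.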

I use aiksaurus from the command line in one of the xterm windows for my Thesaurus. Here is some example output from aiksaurus:

aiksaurus newcomer
=== immigrant ================
arrival, arriviste, comer, emigrant, entrant, fledgling, greenhorn, immigrant, intruder, newcomer, outsider, parvenu, recruit, rookie, settler, squatter, tenderfoot, upstart, visitor

I believe there are GUI front ends available for both dictd and aiksaurus. But as I have never used them I will let others share about those in the comments.

I always have the GNU Midnight Commander (mc) file manager running in one of the xterm windows. I prefer mc for most of my file management duties. It is lightweight and can run from a command line when one’s GUI has taken a nose dive. It is installed by default with Mandriva.

My web browsers, yes I use two regularly, are Firefox and Opera. I use Firefox primarily, with Opera as my backup for rendering some broken sites that do not play well with Firefox. With Firefox I have NoScript as well as several other add-ons to block certain web annoyances. For example, I want to see Flash content only when I choose to see it, so one of my Firefox add-ons is Flashblock. Flashblock blocks Flash content but gives one a button to click to allow the content to run. This, along with NoScript, can really speed up access to certain sites that are rife with advertising screaming for one’s attention.

I use Kontact (yes, it is a KDE application), a personal information manager that combines Kmail (e-mail), Knode (USENET news reader), a calendar, a contact manager, a notes widget, a ToDo list, a journal, and Akregator (RSS feed reader).

For instant messaging I use Kopete, another KDE application. It allows me to contact friends, family and acquaintances on several instant messaging services including AIM, Jabber and Windows Live Messenger.

Xchat 2 is my IRC application of choice. I use it to connect to Freenode and a couple of other IRC networks to keep in touch with official project channels and support, such as the #mandriva channel on Freenode for the times I need to ask a silly question instead of searching the web for the answer on my own.

My office suite is OpenOffice.org. I was pleasantly surprised recently to discover that OpenOffice.org Writer will now open WordPerfect 12 documents. With the contributions from IBM (I presume) it also will open my old Lotus WordPro documents. Naturally OpenOffice.org will open and edit Microsoft Word and Excel format files. When using Microsoft proprietary files I recommend saving them as Open Document Format (ODF) files whenever possible.

My financial management software is GnuCash. GnuCash does what I need to keep up with my personal finances and my small business finances. GnuCash does not have a “payroll” feature, yet. Since I do not need a payroll feature for my small business, the ability to track accounts payable and accounts receivable and to print professional looking invoices is enough for me.

I occasionally need to crop a picture or tweak a graphic for my web sites. My choice for that is The GNU Image Manipulation Program, a.k.a. The GIMP. I could not care less if The GIMP does not work like Adobe Photoshop. The GIMP does what I need it to do. All the graphics professionals who whine that they cannot use GNU/Linux without Photoshop miss the point of FOSS. They should get involved with The GIMP project and help add the features they desire. If they cannot program they can at least test and provide feedback. In the end everyone wins with a better GIMP for all.

Those are the software packages I use most to Get Things Done. What about play time? I do have a few games I like when I need a break from reality. The games I play regularly are Wolfenstein Enemy Territory (3D FPS), Unreal Tournament 2004 (3D FPS), and Quake IV (3D FPS). These are three dimensional (3D) first person shooter (FPS), shoot ’em and blow ’em up games. I bought Unreal Tournament and Quake, but Wolfenstein Enemy Territory is “free”. When I feel less aggrieved with life I play around with Flight Gear (3D flight simulator) and TORCS (a 3D car racing game). All of these games run natively on GNU/Linux. I will only run games that run natively on GNU/Linux. I will even buy games that run natively on GNU/Linux. If a game does not run natively on GNU/Linux and requires WINE I won’t buy it nor will I “pirate” it to run it.

That is the list of software I use the most on my GNU/Linux PC. Feel free to share your own list of software in a comment.



Edit Fri Sep 11 11:16:54 CDT 2009: Clarify the line about FPS.

Opera on GNU/Linux – Moving an Account Reveals a Problem

I recently purchased the final part to build myself a new AMD Phenom Quad-core PC system to run Mandriva desktop GNU/Linux. I have been getting the parts a piece at a time over the past 12 months. I may go into the specifications of the new PC in a later article. For now I want to cover resolving a problem I had with Opera after my move to the new PC.

I create several accounts for myself on my desktop GNU/Linux system. Each account is used for a different purpose. Over the years I have ended up with two “personal” accounts. When moving to my new PC one of my goals was to consolidate those two separate “personal” user accounts into one. This meant copying settings and saved data from one account to the other so as not to lose them. The applications in question were used regularly in only one account, so I did not have to worry about trying to merge settings. Since I had been using the account being copied for several years, recreating all the settings for all the applications I wanted to continue using would have been a real problem. One of the applications for which I wanted to preserve the settings is Opera.

On GNU/Linux Opera stores its data in a hidden directory named .opera in each user account. Desktop GNU/Linux user accounts are typically under /home/username where username is the login name of each user. To preserve the settings I copied the .opera directory from the old account on the old PC using the shell file transfer capability in mc (GNU Midnight Commander file management utility) as root. I had to enable root ssh access on the old PC to do this. Giving remote ssh access to root is not recommended for regular day to day operations but is useful in situations like this. Then I ran chown -R newuser.newuser .opera on the copied directory to give it and the files below it the name and group of the new user.
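For those who prefer plain commands over mc, the same move could be done roughly like this (a sketch; the host and user names are placeholders, and it assumes the temporary root ssh access described above):

scp -r root@oldpc:/home/oldusername/.opera /home/newusername/
chown -R newusername:newusername /home/newusername/.opera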

The first time I ran Opera under the new account I received this popup error message:

Opera error - Store init failed

I do not use Opera Mail but obviously something “broke” when I copied the .opera directory. This error does not keep Opera from working but it is an irritant, and I decided to find the problem and solve it. A quick perusal of web search results turned up information about this error but nothing that applied to my situation. Granted, I only looked at three of the returned results because I had an inspiration. I suddenly suspected that Opera uses hard coded paths in some of its files. I opened an xterm and ran grep -r "/home/oldusername" .opera, which revealed my suspicion was correct.

The files that grep showed with hard coded paths are global.dat, opera6.adr, opera6.ini and pluginpath.ini. The fix is as simple as making sure Opera is closed, then opening each file in vim and using :%s/\/home\/oldusername/\/home\/newusername/g followed by :x to save and exit. Yes, there are ways to do this with awk, sed and the like but I don’t know those tools as well as I should (Sad, isn’t it?). But I do know vim so that is what I used. For the user who fears the command line, a GUI text editor like gedit or nedit would work as well. Some of the hard coded paths in pluginpath.ini pointed to directories that existed only on the old PC so I just removed those. After editing these files I opened Opera and the error was gone.

Okay. I know some of you are itching to show your elite command line skills so feel free to provide your own solution to this problem with sed, awk or whatever favorite command line tools you use for these problems.
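To get the ball rolling, a single GNU sed command along these lines should do the same edit in one pass (an untested sketch; close Opera first, and note that -i rewrites the files in place):

cd ~/.opera
sed -i 's|/home/oldusername|/home/newusername|g' global.dat opera6.adr opera6.ini pluginpath.ini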



Edit Sun Aug 16 12:07:39 CDT 2009: Repair a poorly worded sentence.

How to Kill a Linux/Unix System and Live to Tell the Tale

I am about to admit, in public, to doing something extremely boneheaded once. Why? I just finished reading Carla Schroder's article at LinuxPlanet titled Linux Works Even When Your PC is Committing Suicide. This article reminded me of my past foible and I decided to share it with you to make a point.

I have been a “Unix guy” for many years now. I started with SCO Xenix back in the late 1980s, installing and supporting it for SMB companies that needed Point of Sale and business accounting systems. Typically this would entail running Xenix on a server with dumb terminals at each desk or Point of Sale seat. From SCO Xenix these systems migrated to SCO OpenServer and SCO Unixware. Sometime in the late 1990s I built my own in-house server to run SCO Unixware at my office as a file server. This worked quite well and did not cost me too much as an SCO reseller, since I was able to get Not For Resale (NFR) copies of SCO products at a steep discount.

In the early 2000s I was running IBM OS/2 on my desktops, one multi-boot PC with IBM OS/2 + Caldera Linux + Microsoft Windows 98 (to run Quickbooks), and the SCO server where all my company data was stored. After installing some updates on the SCO server I decided to go into the /etc directory, where all system configuration files are stored, and remove some old directories that were no longer needed. I had to log in as root (system administrator) to do this. While in the /etc directory I issued this command:

rm -rf olddirectory/ *

Notice there is a space between the slash “/” and the asterisk “*”. I pressed Enter and eventually realized the command was taking longer than I expected; removing what I thought I was removing should have taken about 2 seconds. By the time I understood my error and canceled the command, almost all of /etc was gone. For you command line novices I will explain what I did wrong. By adding that unintended space after the slash, followed by the asterisk, I told the remove command, “rm”, to delete everything recursively in both “olddirectory/” and the current working directory. Yes, that is A Very Bad Thing, as the /etc directory tree is a critical system directory under Unix and GNU/Linux.
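One habit that would have caught this: preview the glob expansion before running a destructive command by prefixing it with echo. Nothing is deleted; the shell just shows what rm would have received:

echo rm -rf olddirectory/ *

If the listing contains more than one expects, stop and fix the command.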

Did this kill the system? Well, yes and no. As long as I did not reboot the system I was okay. Had I made the mistake of rebooting the system I would have had more problems recovering than I cared to have. The Unix system happily churned along with an empty /etc directory and I was able to copy all the company data off the server to a PC on the LAN, including the Quickbooks file that had all the accounting data in it. Then I could decide on staying with SCO, which was starting to make noises against GNU/Linux, or move to something else. I decided to move to FreeBSD on the file server and did so.

Eventually I migrated all the company desktops to GNU/Linux (because IBM decided to “kill” OS/2, which is still not dead yet, and I wanted to future-proof my systems), began using GnuCash in place of Quickbooks (I don’t need payroll software for my small, family business) and kept FreeBSD on the file server. I have been and am content with this change and will stay with this setup for the foreseeable future.

I sometimes consider what would have happened had I deleted a critical system directory tree under some other operating system. I have serious doubts that other operating systems’ server versions would have kept running long enough for me to copy data easily over the LAN. I have no doubt whatsoever that any GNU/Linux system would fare just as well as that venerable SCO Unixware system I killed. To me this illustrates the robustness of all Unix-like operating systems and is just another reason why I will continue to use them.


Edit Thu Apr 23 09:24:43 CDT 2009: Oops! Somehow the setting requiring users to be registered and logged in to comment was on in our WordPress settings. Of course no one here knows how that happened and I know I did not check that box recently. Anyway, it is back to anyone can comment now. I apologize for any inconvenience this may have caused our readers. Gene

Two Reasons the Command Line Trumps the Graphical User Interface

My inspiration for this article came from reading Akkana Peck’s Intro to Shell Programming: Writing a Simple Web Gallery at LinuxPlanet today.

Before I get into this I will state for the record I am not a text mode Luddite. I use a graphical user interface (GUI) every day. In fact I am using the fluxbox window manager GUI as I write this article with a WordPress GUI and Firefox GUI. I like my GUI chewy goodness as much as any visually stimulated human. However, for certain tasks a GUI is just not the best choice.

The first reason is twofold: quickness and convenience. To make this point I will use GNU/Linux distribution software installation and removal. If one has one’s distribution repositories set up, knows the application one wants to install, and knows the command line string to use for installation on one’s GNU/Linux distribution of choice, then installation is much faster at the command line. For example, I want to install K9Copy, a DVD duplication application not included or installed by default on my Mandriva Linux system but available in the Penguin Liberation Front (PLF) third party repositories for Mandriva. From the GUI installer under KDE I have to use the following steps:

  • Click the Menu button.
  • Click “Install & Remove Software”.
  • Provide the administrator (root) password.
  • Wait for the user interface to load …
  • Wait for the user interface to load …
  • Wait for the user interface to load … finally!
  • Click File > Update media
    Because I want to make sure I have the latest repository updates.
  • Wait for the repository database to be updated.
  • Type k9copy in the search bar.
  • Click the check box beside K9Copy.
  • Click the Apply button.
  • Wait for the application installation confirmation dialog.
    Dangit! I already said to do this once, now I have to say do it once more.
  • Click the Yes button (It is okay to continue, stupid GUI).
  • Finally get the application to install.
  • Wait for the GUI to reset after the install.
  • Close the GUI.

Doing this set of actions can take several minutes. On the other hand I can switch from my GUI to a console login with Ctrl+Alt+F1, login as the administrator (root) and type this at the command line prompt:

urpmi.update -a && urpmi k9copy

Then switch back to my GUI with Ctrl+Alt+F7 and conveniently continue typing this article while the program installs. The urpmi.update -a command tells my installer to update its sources. The && tells the shell to do the next thing only after the first one completes. The urpmi k9copy tells my installer to install that application. The Mandriva urpm* tools are smart enough to know that k9copy is k9copy-1.2.3-1plf2008.1.i586.rpm. All this will run in the background while I get stuff done. Now that I have finished this paragraph I can switch back to the console with Ctrl+Alt+F1 and exit from the administrator session.

The second reason the command line trumps the GUI is repetitive tasks. I could illustrate this here with a clever shell script. However, I think I will refer to Akkana Peck’s article I mentioned at the beginning. Go read it if you have not. In summary, Akkana shows how to use a shell script loop to modify a directory full of JPEG files with two of the ImageMagick command line tools. While one could do this with a GUI like The GIMP, I would only recommend doing that for a very few files. If one needs to modify a few hundred graphics to be a standard size for a web site gallery then the command line tools Akkana shows how to use are going to save the day.
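The pattern is easy to adapt. Here is a generic sketch in the same spirit (these are not the commands from Akkana’s article; the directory name and target size are made up for illustration):

# Make web-sized copies of every JPEG in the current directory.
mkdir -p gallery
for f in *.jpg
do
    # -resize keeps the aspect ratio; -strip drops bulky metadata.
    convert "$f" -resize 800x600 -strip "gallery/$f"
done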

I have seen all the arguments that Joe Sixpack could not care less about a command line. That is absolutely fine since Mr. Sixpack is more than likely only wanting to browse the web, play a few games, send and receive e-mail and work on his genealogy. All these can be done in GNU/Linux just fine without ever needing to see a command line. However, should Mr. Sixpack ever need to create a family web gallery for the Sixpack family using a few hundred digital photographs from a few dozen different cameras he will have a big task on his hands. Then maybe, just maybe he will see Akkana Peck’s article and find out an easy way to get all those pictures the right size for his gallery using the much maligned command line. I am certain our friend Mr. Sixpack will be very happy to see that command line example from Akkana if he ever needs it.

Please feel free to comment and provide some of your favorite time saving or repetitive command line tasks.



Backup with growisofs

I wrote an article in 2006 about making backups with growisofs on DVD+RW media. That article is Using growisofs with DVD+RW Drives for Backups on our main web site. Yesterday, 2008 December 31, I added a new addendum that details using tar to create a file that then gets backed up using growisofs. If one has seen the article in the past, now would be a good time to revisit it and check out the new information.
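The gist of the addendum can be sketched in two commands (these are hypothetical paths and device names, not the article’s exact commands; see the article itself for the real details and options):

tar -czf /tmp/backup.tar.gz /home/user/important
growisofs -Z /dev/dvd -R -J /tmp/backup.tar.gz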

Since this is my first article of 2009 on The ERACC Web Log, have a great New Year everyone.


Managing Pesky NFS Mounts With A Shell Script

I have used NFS[1] mounts in our SOHO for many years for personal and business storage of files that need to be accessed from more than one PC. The file systems in /mnt are symbolically linked into each user’s /home/user/mounts directory, which is what makes accessing them so easy from each user account on a Linux PC. This is great when it works, not so great when one of the shares goes belly up. When one NFS share stops responding, new access to all connected file systems in /mnt is likely to be nonexistent, causing applications that try to access the /home/user/mounts directory to hang. For years I have been manually removing each mounted NFS share with umount -f and then restarting them one at a time with mount. I know, I know, I know – I should have studied the man(ual)[2] pages instead of doing that.

Today I did study the man pages for mount and umount because I was tired of manually dismounting and remounting all these shares periodically when one of them died and blocked my access to all my mounted file systems in /mnt. After all, I thought, there are likely situations where locations have dozens of mounted NFS shares and need to reset one or more. This has to have been addressed with a way to handle multiple shares with a single command. In other words, I figured there had to be a better way and I was correct.

What I found were umount -a -f -t nfs and mount -a -t nfs, which do in a couple of commands what I had been doing tediously by hand, one share at a time. Now, I am aware that this is a shotgun approach that does not take into account open files on these shares. On a multiuser system with multiple NFS shares on other systems, one would need to be more aware of, and cautious about, others who may be actively using the shares. But in my case these are connections for personal desktop use and I can kill them off without worry that I will affect anyone other than myself.
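Both commands operate on the NFS entries in /etc/fstab, so the shares must already be listed there. A typical entry looks something like this (hypothetical server and paths):

fileserver:/export/shared  /mnt/shared  nfs  rw,soft,intr  0 0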

I still wanted to reduce what I had to type at the CLI (Command Line Interface) to do this so I wrote a script that I placed in /root/bin to run at need. Here is the script:

#!/bin/bash
# remountnfs - force a dismount, then a remount, of all NFS shares in /etc/fstab
echo "Remount remote NFS shares."
echo "Forcing umount of remote NFS shares ..."
# -a = all, -f = force, -t nfs = only act on nfs type file systems
umount -a -f -t nfs
# Show any NFS shares that refused to dismount.
mount|grep "type nfs"
echo "Sleeping 5 seconds ...";sleep 5;echo "Mounting remote NFS shares ..."
mount -a -t nfs
echo "Listing remote NFS shares ..."
mount|grep "type nfs"

Study that and see if you can figure out what it does by looking at the man pages for mount and umount. I named this remountnfs and ran chmod 750 /root/bin/remountnfs to make it executable from the CLI. Now all I need to do to reset those nasty hung NFS shares is type su -c /root/bin/remountnfs at the CLI and supply the root password; that resets the shares and makes /mnt accessible once more.

  • [1] If you have no idea what NFS happens to be then read What is NFS from the Linux NFS-HOWTO.
  • [2] Anyone that really wants to be a Linux Power User should learn to study and decipher man pages. Start by studying man man and go from there. Almost every command one can type at the CLI has a corresponding man page.

Have a suggestion as to how I could do this better? Leave a comment.