Open Source: A GUI Minimalist Tries e17

… and likes it … well, mostly. Hopefully this article will help you if you are searching for that “just right for me” GUI on your Unix/Linux system. But be sure to give e17 a try yourself; do not just pass it over because of anything you may read on the web.

For years now I have been a die-hard GUI minimalist, relying on light window managers and desktops such as fluxbox, Window Maker and XFCE4. As such, I was dismayed when the Mandriva Linux distribution decided to drop official support for all GUI options other than its in-house “ROSA” interface. When I added that to the other disappointing news coming from Mandriva, I decided to move along to Mageia Linux. After that move was completed I decided to take a look at the other desktop options available, and while doing so I saw e17 listed. This is e17 version 0.16.999.55225, for those of you who want to know version numbers. I had seen e17 in Mandriva's package management system in the past, installed it and taken a brief look at it, but I had never given it a real try. This time, I decided to use e17 for at least a month and not use any other desktop or window manager. That was around the middle of September 2011; it is now the second week of December 2011 and I am still using e17 … for now. Following are my impressions, likes and dislikes regarding e17 so far:

e17 Impressions

  • Appears to take very few resources, which appeals to my GUI minimalist mind.
  • Seems a bit rough around the edges in a few places.
    • Auto-hide of the shelves stops working sometimes. I have to open the settings dialog for a shelf and save it again to “fix” this.
    • At times the Taskbar gadget running in a shelf mishandles / overlaps / truncates the items that it shows running on a desktop. Opening the shelf settings dialog and saving again “fixes” this.

e17 Likes

  • One can configure keyboard shortcuts for pretty much anything.
  • One can change the mouse context menus to match what one prefers, such as right click the desktop for the Main menu.
  • One can quickly switch workspaces just using ALT + Fn keys.
  • One can have multiple desktop background images on a per-workspace basis.
  • One can have multiple shelf objects and have them auto-hide to maximize the usable screen area.
  • All the e17 settings appear to be saved in the ~/.e directory, making it easy to back up and restore custom settings.
  • Does not start out with a lot of garbage add-ons loaded that one then has to disable or remove to get a streamlined GUI.
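
Because everything lives under that one directory, backing up the e17 settings is nearly a one-liner. Here is a minimal sketch, assuming the settings really do all live in ~/.e as described above:

```shell
# Archive the whole e17 settings directory (assumes ~/.e holds everything).
backup="$HOME/e17-settings-backup.tar.gz"
if [ -d "$HOME/.e" ]; then
    tar czf "$backup" -C "$HOME" .e
fi

# To restore after a mishap, log in at a console with e17 NOT running:
#   tar xzf ~/e17-settings-backup.tar.gz -C "$HOME"
```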

e17 Dislikes

  • Binary configuration files?! What?!
    (This will be the reason I end up eventually abandoning e17. I truly loathe the use of binary configuration files under unix-like systems. That is just wrong. Use of plain text for configuration files and log files is one of the primary reasons I love unix-like systems as well as fluxbox, Window Maker and XFCE4. Being able to repair problems or tweak settings by hand in a plain text file from a command line is a big plus for me.)
  • Apparently e17 handles windows so differently from fluxbox, XFCE4, Window Maker, KDE and GNOME that my favorite screen capture tool, Shutter, cannot find windows of which to take snapshots. There is a screen capture application included for e17, but it has fewer options and functions than Shutter.
  • When my X sessions kept crashing recently due to a bad graphics card, e17 lost all my custom settings after one such crash. While one can recover from this with a backup of one’s /home/user directory, restored from a command line while e17 is not running, it is at a minimum very annoying. Of course, loss of all settings also happened to me when I used some KDE4 applications for a while. So this is not just an e17 problem.

As a GUI minimalist my concerns are not glitz, glitter, bells and whistles. I want function over form. If the GUI does what I want it to do with as little memory, CPU and GPU use as possible, which means less power consumption, then I could not care less about transparency, wobbly windows and fire burning up my closing windows. If the GUI is pretty as well, that is just a bonus.

Since Mageia did include e17 0.16.999.55225, which is a “work in progress” release, it is possible, even probable, that some of the problems I note here have been fixed in subsequent releases. The latest “snapshot” release of e17 as of the time of this article is 0.16.999.65643. As I do not go outside my distribution’s package management system for anything other than a few games, I will just have to wait for Mageia to catch up with the latest release of e17 to see what is fixed and what is not.

For the most part, I am favorably impressed with e17. However, the use of binary configuration files is a serious enough personal problem for me that I will eventually move back to one of my favorite GUIs. Of course if the enlightenment desktop team changes course and begins to use plain text files to store configuration data, well, I may just decide to add enlightenment desktop to my small list of favorite GUIs. I am not going to hold my breath waiting for that though.

Custom PC from ERACC   Custom Notebook from ERACC

Security: Linux, OS X, Unix and Malware (Viruses)

I recently had the opportunity to look into the anti-malware world of Apple OS X. One of our clients moved to a new office in late October 2011. As part of this move they also moved from Microsoft operating systems and software to Apple OS X systems and software, making a clean break from all things Microsoft. While researching their question about anti-malware for OS X I found that the world of anti-malware for OS X is just as fraught with information and disinformation from Apple fans, Apple opponents and anti-malware vendors as the world of Linux seems to be at times with its fans and detractors. I came to the following conclusion which is paraphrased and expanded from the e-mail I sent our client.

After a lot of research over the past month I have come to the conclusion that costly Unix, OS X and Linux anti-malware programs, such as Norton anti-virus on OS X, are a waste of money. It is not that unix-like systems are invulnerable to attack, but that the types of attacks I have seen mentioned over this month will get right through most anti-malware software on systems that are vulnerable. All these anti-malware solutions seem able to do is protect the Microsoft-using friends and clients to whom you might forward an infected e-mail sent to you from someone else using an infected Microsoft Windows system.

The fact that Microsoft software is perceived to have the largest installed base does mean there are many more attacks against Microsoft systems in the desktop space. But, just because a system is highly targeted does not mean it can be successfully targeted. The flawed core design of all Microsoft operating systems, desktop and server, means more successful attacks. The “designed with security in mind” unix-like systems are much less likely to experience a successful attack on the desktop or the server. This is not to say they will never be targeted; they will. It is just that the incidence of successful attacks is likely to be much lower than it is on Microsoft’s systems. Of course, any desktop system that has a user interacting with it can be successfully attacked through social engineering. User education is the only solution to social engineering attacks.

Unix-like systems are typically not susceptible to traditional Viruses such as those found on Microsoft Windows. However, they can be susceptible to social engineering, Worm and Trojan Horse attacks. Here are some basic definitions:

Virus – Self-replicating malware that attaches to executable files. The infected program has to be run for the virus to spread. Typically viruses will seek out system programs that start when the system starts. Or they will seek out specific software that is ubiquitous across the platform the virus is designed to attack.

Worm – Self-spreading malware that attacks un-patched, vulnerable “services” on networks. A worm usually runs as a “rootkit” on an infected system and attacks print servers, name servers, web servers, file servers, and the like. All modern desktop operating systems are likely to be running a service of some sort. Server systems will definitely be running some services. This malware only succeeds if the service is vulnerable due to not being patched or up to date. A successful attack will then install the worm as a new “rootkit” on the infected system and start scanning the network for more vulnerable systems it can attack from the newly infected system.

Trojan Horse – Embedded malware that requires the end-user to install an application that is pretending to be something it is not. This is the most prevalent attack these days. In some cases it preys on a user’s ignorance by using social engineering to get the user to install malware. Fake anti-virus pop-up messages from infected web sites are the most common infection vector. In other cases, deliberately infected installation files from “strange” web sites are the attack vector. A Trojan Horse may include a virus, a worm or both.

For more detail see this: The Difference Between a Computer Virus, Worm and Trojan Horse

All anti-virus software is retroactive. In other words, for all of these, the anti-virus software has to already “know” about specific malware to be able to prevent it. In the event of web site pop-up malware, most, if not all, anti-virus software will happily let one install the software that is masquerading as anti-virus software. In some cases the end-user might get a warning that the software is malicious. But often no warning is given, even with web scanning enabled in the anti-virus software. Then the uneducated, gullible user clicks on the dialog and essentially, but unknowingly, agrees to infect the system.

Here is one example of a successful Trojan Horse web based attack on OS X systems:

Mac OS X Viruses: How to Remove and Prevent the Mac Protector Malware

If you are new to unix-like systems and have not yet purchased anti-malware for your Linux, Unix or OS X system, here is what I recommend. Do not buy anti-malware for unix-like systems as you will likely just be wasting your money. Instead, use a “free” anti-malware package if you feel you must have one. Keep your unix-like systems up to date. Then educate yourself, your friends, your acquaintances and your employees about these web based Trojan Horse attacks. Think long and hard before you download and install any “strange” software from “strange” web sites. Ultimately the human mind considering the pop-up message on the screen or the program download from a “strange” site is the last line of defense against these attacks. An educated, cautious mind is the best defense against a social engineering malware attack.
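
For the record, the usual “free” scanner on unix-like systems is ClamAV; that choice is my illustration, not a recommendation from anyone else. A basic on-demand scan looks roughly like this, guarded so it does nothing where clamscan is not installed:

```shell
# On-demand malware scan with ClamAV. Scanning ~/Downloads is just an
# example target; the guard makes this a no-op without ClamAV installed.
if command -v clamscan >/dev/null 2>&1; then
    freshclam 2>/dev/null || true              # update signatures (may need root)
    clamscan -r --infected "$HOME/Downloads" || true
fi
```

Keep in mind the point above, though: even a scanner like this is retroactive, and it will not stop someone clicking through a fake pop-up.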

Finally, we all know there are some people for whom no amount of training will suffice. Not all brains are equal in their ability to process and store information. These folk may listen politely to explanations about malware, nod that they understand, then go ahead and click the [Install] button on a malicious pop-up dialog and follow the instructions to completion. There are also people who refuse to learn anything they perceive to be “inconvenient”. They remain willfully ignorant of malware threats as a result. These people will always be the weak link in the chain when attempting to protect personal and business systems from malware. The malware clean-up businesses will still have a strong future with these people using computers, no matter what operating system they use.

Open Source: When Updates are NOT the Problem

I recently had a fun experience. My Mageia 1 Linux system seemed to be experiencing hard lockups requiring a push of the reset button to “resolve”. By “hard”, I mean no keyboard input, no screen updates in X and sometimes no ping response from another PC on the LAN. I had run some updates, including a new kernel update, and these lockups appeared after running the updates. Cause and effect. Yes? Well, no. It turned out I was having a hardware problem. Here is how I figured that out.

All was well with the world and my PC on Friday, 11 November 2011. Okay, maybe not with the world, but my PC was humming along just fine. Then Saturday came along and it was Update Day. Since my PC is my business system as well as my personal system, I usually try to run my updates on the weekends to avoid down-time during the week. The updates completed successfully and these packages were updated:

flash-player-plugin-11.1.102.55-1.mga1.nonfree Sat 12 Nov 2011 04:16:02 PM CST
libmsn0.3-4.1-5.1.mga1                        Sat 12 Nov 2011 04:15:52 PM CST
kernel-source-latest-2.6.38.8-8.mga1          Sat 12 Nov 2011 04:15:51 PM CST
kernel-desktop-latest-2.6.38.8-8.mga1         Sat 12 Nov 2011 04:15:51 PM CST
kernel-desktop-devel-latest-2.6.38.8-8.mga1   Sat 12 Nov 2011 04:15:50 PM CST
kernel-desktop-devel-2.6.38.8-8.mga-1-1.mga1  Sat 12 Nov 2011 04:15:40 PM CST
kernel-desktop-2.6.38.8-8.mga-1-1.mga1        Sat 12 Nov 2011 04:14:47 PM CST
kernel-source-2.6.38.8-8.mga-1-1.mga1         Sat 12 Nov 2011 04:14:31 PM CST

(Output from ‘rpm -qa --last | less’)

Of course, after getting a kernel update a reboot is required to load the new kernel. So I rebooted the system and everything seemed to be working fine. Sunday evening came along and I decided to play a bit of Unreal Tournament 2004, a.k.a. UT2004, and frag some bots for relaxation. Yeah, I am a “violent guy”, NOT. I was into the middle of a Capture the Flag run when my game froze hard. What was my first thought? Yup, you guessed it, “That kernel update has messed up my box!” It is sad how we humans jump to the wrong conclusions so quickly, is it not? How many folks would be happier if we all took a step back from our assumptions and reconsidered before acting? Not only that, jumping to the wrong conclusion in my case turned into a week of unnecessary frustration and angst.

I spent several wasted hours every day looking for a solution to my “kernel problem”. You may wonder why I started hunting for kernel problems. I was using an nVidia based graphics adapter running the non-free nVidia driver supplied with my distribution. During forensics after my first hang I saw this in /var/log/messages:

kernel: NVRM: os_schedule: Attempted to yield the CPU while in atomic or interrupt context

I started thinking the new kernel and nVidia driver Had A Problem and based on that poor assumption I forged ahead to find The Solution To The Problem. I will spare you the details of all the wrong turns and dead-ends I found during my week of agony. Let us just say, it was not fun trying to fix a problem that did not exist anywhere else but in my fevered imagination.

What happened to get me on the correct track? Yesterday, 22 November 2011, my system suddenly started hanging when not running a 3D game. I was just looking up parts to order for a new PC build for one of our Linux clients. This was the “Ah ha!” moment for me. I had never had this happen under any Linux system except when I had a hardware problem. So, I shut down the PC, pulled my CD case out of my briefcase and loaded the Live Parted Magic CD to run hardware tests. The RAM was tested first – no errors. Then I rebooted to the PM GUI and ran GSmartControl disk tests simultaneously on my four SATA drives.
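
GSmartControl is a graphical front-end to the smartmontools package, so the same disk self-tests can also be started from a command line. A sketch, assuming four drives at /dev/sda through /dev/sdd as on my box:

```shell
# Kick off a short SMART self-test on each SATA drive that is present.
for dev in /dev/sd[a-d]; do
    [ -b "$dev" ] || continue          # skip device names with no drive attached
    smartctl -t short "$dev" || true   # the test runs on the drive (~2 minutes)
done
# Read the results afterward with, for example:  smartctl -l selftest /dev/sda
```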

The first drive finished with no errors. While waiting for the other drives to finish, Parted Magic had a hard hang. My very next thought was, “It is the graphics card!”, because the only time I have had Parted Magic hang like that was when I encountered a bad graphics card on a client’s PC I was trying to diagnose. Yes, this is another bit of assumption. But this time it was correct.

However, I wanted to be sure I was following the correct path this time, so I did some more forensic investigation before marching ahead. I rebooted my PC off the boot drive into Mageia 1, logged in at the console of my Linux router and used ‘ssh’ to reach my PC. Then I used ‘su’ to become root and ran ‘tail -F /var/log/messages’ to watch what was happening while I used the PC. Within a few minutes of use the PC “froze”, and these were the last lines displayed in the log:

Nov 22 21:41:33 era4 kernel: NVRM: Xid (0000:01:00): 8, Channel 00000000
Nov 22 21:41:35 era4 kernel: NVRM: os_schedule: Attempted to yield the CPU while in atomic or interrupt context
Nov 22 21:41:37 era4 kernel: NVRM: os_schedule: Attempted to yield the CPU while in atomic or interrupt context
Nov 22 21:41:42 era4 kernel: NVRM: Xid (0000:01:00): 16, Head 00000000 Count 00017c0c
Nov 22 21:42:09 era4 kernel: NVRM: Xid (0000:01:00): 8, Channel 00000020
Nov 22 21:42:26 era4 kernel: NVRM: Xid (0000:01:00): 8, Channel 00000020
Nov 22 21:42:41 era4 kernel: NVRM: Xid (0000:01:00): 16, Head 00000000 Count 00017c0d
Nov 22 21:42:50 era4 kernel: NVRM: Xid (0000:01:00): 8, Channel 00000020
Nov 22 21:42:59 era4 kernel: NVRM: Xid (0000:01:00): 8, Channel 00000020
Nov 22 21:42:59 era4 kernel: NVRM: Xid (0000:01:00): 16, Head 00000000 Count 00017c0e
Nov 22 21:43:13 era4 kernel: NVRM: GPU at 0000:01:00.0 has fallen off the bus.

See that last line? It means the driver could no longer “see” my video card. I think the hard lockups occurred because the non-free, proprietary nVidia 3D driver hooks into the kernel to do its “3D magic”, so a video failure can hang the entire system. If there is a way to do fast 3D processing under X without hooking into the Linux kernel, well, I vote for that. Why? So that an ‘ssh’ into a desktop Linux box with a dead video card has a chance of succeeding, giving a savvy troubleshooter a chance to do forensics on the running system. In any case, a switch to a new video card solved the problem.
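
After a crash like this one, the relevant driver messages can also be pulled from the log after the fact. A small sketch; /var/log/messages is the path on my Mageia system, and other distributions may use /var/log/kern.log or a journal instead:

```shell
# Show the nVidia driver's most recent complaints from the kernel log.
log=/var/log/messages
if [ -r "$log" ]; then
    grep 'NVRM' "$log" | tail -n 20
fi
```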

My new video card? It is an “old”, unused ATI Radeon X1650 Pro I had sitting on a shelf here. It is using the “free” ATI driver supplied with my distribution. Oh, and my 3D game, Unreal Tournament 2004, works just fine with that. However, I have not gotten Quake 4 to run yet with the “new” setup. I expect I will be able to get Quake 4 working if I decide to take the time to look into that. But for now, I am happy with what I have. At least I can get my work done, which is much more important than any game.

Now, if you will excuse me, I need to get back to building Linux based systems for our clients. Thanks for stopping by though.

Open Source: Why Military Forces Should Use Linux

Why? Because the level of skill required to crack a Unix-like OS is much higher than that needed for a Microsoft OS. Further, properly configured Unix-like systems are much more robust than Microsoft systems. Were Military forces using properly configured and properly secured Unix or Linux systems we would not see items like these below being reported.

I just had a, “What were they thinking?!”, moment while reading this article at ars technica: Computer virus hits US Predator and Reaper drone fleet. First, it is not a “computer virus”, it is a Microsoft operating system virus. Second, using Microsoft operating systems for any critical Military computer systems is just wrong. I know the US Military has specifications for rugged computer systems that must be made in the USA. That makes sense. What does not make sense is that the US Military will still accept Microsoft operating systems on its critical, sensitive hardware at this point in time. That is like specifying a bank vault that can withstand a nearby nuclear blast, but allowing the builder to install a screen door for access to the vault. It is just a Bad Idea!

This was a deja vu moment as well. I was following news about Military systems back in the 1990s and had a similar experience when I read about the US Navy “smart ship” running Microsoft Windows NT … and having a ship-killing system failure: Software glitches leave Navy Smart Ship dead in the water. I completely agreed with Ron Redman, deputy technical director of the Fleet Introduction Division of the Aegis Program Executive Office, at the time when he stated:

“Unix is a better system for control of equipment and machinery, whereas NT is a better system for the transfer of information and data. NT has never been fully refined and there are times when we have had shutdowns that resulted from NT.” … and … “Because of politics, some things are being forced on us that without political pressure we might not do, like Windows NT,” Redman said. “If it were up to me I probably would not have used Windows NT in this particular application. If we used Unix, we would have a system that has less of a tendency to go down.”

Actually, after re-reading that, I disagree that NT, or any Microsoft OS, was or is “a better system for the transfer of information and data” when compared to a Unix-like OS. I would use Linux for that too. Especially in a critical Military system like a “smart ship” or a drone control center. Frankly I do use Linux for operational security and the secure transfer of information and data in my own small business. I thank God that I do not have to succumb to political pressure forcing me to use a Microsoft OS for my business. It seems to me, if I can figure out how to implement Linux for my personal and business use, surely the US Military can do the same for its critical systems infrastructure. Obviously some people in the Military “get it” when it comes down to what system is best for critical control systems. Now if only the Microsoft lobbyists could be kept from affecting the decisions as to what systems are best for the US Military.

Microsoft still makes a decent gaming operating system. But that is about the sum total for which I would agree a Microsoft system should be used. Even there I am agreeing reluctantly only because the majority of current PC game development targets the Microsoft OS.

Hey, US Military folk and US Senators with military oversight, if it has to be from the USA, ever hear of Red Hat Linux? How about the US NSA’s own Security-Enhanced Linux? Perhaps it is time for you folk to rethink the requirements for Military computing systems and make one of these Linux operating systems part of the requirement. Or take the Linux kernel source code and use your own internal Military IT staff and programmers to collaborate and build a custom system just for Military use. Any of these would be a better option than relying on a “known to be owned” OS like any of those from Microsoft. I will be glad to introduce you to Linux if you want to pay me for a Linux consultation. Just sayin’ …


Edit Sat Oct  8 20:57:30 CDT 2011: Due to a salient observation elsewhere, change “pwn” to crack in the first paragraph.

Open Source: Niche Markets, Linux and Microsoft

If you are a Linux protagonist who has been around as long as, or longer than, I have, you have seen responses like these over and over as to why Linux distributions will never go mainstream on the PC desktop:

  • “Linux will always remain a niche platform because it does not have a native release of Adobe (Photoshop / Creative Suite / etcetera)!”
  • “Linux does not have Microsoft Office and Microsoft Office power users require Microsoft Office!”
  • “The web portal at (insert portal here) needs Internet Explorer. There is no native release of Internet Explorer for Linux, so no one will want to use Linux!”
  • “Program X does not have a Linux version or equivalent!”
  • Or other claims along the same lines …

Yes, these comments usually do have exclamation points to show how emphatic the claimant feels about the statement. I think these claimants have the equation backwards. All of these cases are what is known as a “niche market”. How many people using PC systems need to use Adobe Photoshop? How many Microsoft Office users are “Microsoft Office power users”? How many end-users of a PC system need to go to a web portal that requires Microsoft Internet Explorer? (I will ignore the fact that many of these “IE only” web portals usually work just fine if one fakes the browser identification string with Firefox or Opera.) How many people need to use Program X on their PC? I am thinking “not that many” for all of the above.

To me this suggests that the Microsoft platform is the niche platform:

  • Do you “need” Adobe (Photoshop / Creative Suite / etcetera) for your job? Then you are a niche user.
  • Do you “need” Microsoft Office because you are a “power user”? Then you are a niche user.
  • Do you “need” access to an IE only web portal? Then you are a niche user.
  • Do you “need” to run Program X on your PC? Then you are a niche user.

The vast majority of PC users do not need, or want, any of the programs that are often claimed to be the problem holding back adoption of Linux on the PC desktop in the mainstream. In my experience with the few end-users I have switched from Microsoft to Linux, some of them did have special needs that precluded using Linux on their desktop PC at this time. The others have zero problems using a Linux desktop PC.

These latter are people who do not try to solve PC problems themselves. They call a “computer guy” when they have problems. They would call a “computer guy” even if they ran Microsoft systems and had a problem. They have no “need” for any of the niche usage scenarios above. They are perfectly content that they can send and receive e-mail, access Facebook, play Flash games, browse web sites, use personal finance software and make a simple spreadsheet with LibreOffice. All from their Linux based desktop PC.

One of these Linux desktop users is also a Skype user, and there are “millions” of Skype users “out there”. Skype usage is less of a niche market than it used to be. That is going to be problematic if Microsoft kills Skype development for other platforms in favor of its own software, now that Microsoft owns Skype. The “embrace, extend and extinguish” paradigm is still Microsoft’s bread and butter. If Microsoft does what I suspect, Skype will end up being merged into some Microsoft based software. At that point our smart FOSS developers will likely figure out a way to inter-operate with the Microsoft software from FOSS programs. However, this “problem” would be non-existent if end-users were aware of and used FOSS communication projects like Ekiga.

So, that said, how do we get from where we are to the mainstream desktop?

The “problem” with adoption of Linux on the end-user desktop is not these niche usage scenarios. As I see it, Linux adoption is a fourfold problem: apathy, education, marketing and pre-loading agreements.

  • Apathy – Okay, there is not much we can do about this one. If an end-user is apathetic about what operating system is on his or her PC just let it go.
  • Education – There are still many people who have no idea what Linux is or can do for them. I still meet people who have not even heard the term Linux. When I can, I give them a brief overview of what Linux is and then give them a Live CD distribution to play with. Those of us who are Linux professionals can take the opportunity to present Linux systems at local Chamber of Commerce gatherings and local technology shows.
  • Marketing – No one company is marketing Linux to the masses on a large scale. There is no “Apple” of the Linux world running television or print advertisements offering an alternative to Microsoft. Most of the “Linux Big Boys” are only marketing to businesses. Actually, I think this should be one of the jobs of The Linux Foundation. But until that organization takes on major advertising, we can use local media and continue to use positive “word of mouth advertising” to “market” Linux.
  • Pre-loading Agreements – Microsoft has pretty much sewn up the pre-load market with major PC manufacturers. Sure, some of these manufacturers give a slight nod to Linux and offer a few systems with Linux pre-loaded. But I am not content with the puny offerings from these major manufacturers. (Of course since my company builds custom systems with Linux pre-loaded this should come as no surprise to our regular readers.) I do not expect this to change any time soon. So, no consumers are likely to see a Linux based PC from HP, Dell, etcetera on the shelves at Best Buy. The only way I see to overcome this at this point is with education and marketing. If we can create a demand for Linux systems like Apple has done for Apple systems, the end result will be Linux systems on the shelves at major retail outlets.

There are people who should stick with Microsoft or Apple systems for their niche usage. For the rest of the PC user base, Linux on the desktop is ready to go.

Open Source: Live Migration of Mandriva to Mageia

Are you in the market for a new laptop, desktop or server PC with Linux installed? Please give us the opportunity to quote a preloaded Linux laptop, desktop or server system for you.

I took the plunge to migrate my personal / business desktop PC from Mandriva 2010.2 to Mageia 1 today (Sunday, 4 September 2011). I used the instructions from this page: Migrate from Mandriva Linux. Specifically, the section titled “b) Upgrading inline, using urpmi (CLI)”. The migration is roughly three quarters done as I type this. I decided to try to use the PC while I ran the migration from console 1 (Ctrl+Alt+F1). In preparation, I closed the programs I suspected would be most affected, such as:

  • Firefox 3.x – which will be replaced with Firefox 4.x
  • OpenOffice.org – which should be replaced with LibreOffice
  • Gnucash – which has my accounting data I do not want to risk
  • Kopete – which is being upgraded

To access our company site and begin this article I kept Opera open. I did try to print a page from Opera and crashed Opera once while the migration was running. I had forgotten about the migration, or I would not have tried that. I am also able to use light applications such as gedit, but still cannot print from them. I do still have access to the LAN and the internet, so the system is usable. But the system is less useful while the migration runs since printing does not work. Of course, problems like that were not unexpected.

The system has not gotten to the point that X is unstable or anything like that, which is pleasantly surprising to me. I had a 50/50 expectation that X would crash while this migration ran. I am only continuing to use the system so I can report to our readers about the experience. Otherwise I would close X, switch to runlevel 3 – which can still be done as I am not yet forced to use systemd – then run the migration at the console without a GUI running at the same time.

I am about to close X since I see X stuff being migrated. I will reboot following the migration. I am interested to see if everything “just works” or if I will have to fix something before I can get back to using the PC. I will be back to report more …

It is about 1.5 hours later and I am back. Here are some interesting items about this migration:

  • Migration began at 11:30 AM CDT and finished installing all the packages at 6:00 PM CDT.
  • In total there were over 2600 packages migrated.
  • The average download speed from my chosen mirror over my broadband connection was around 400k.
  • The 16 GB /usr partition got to 94% full due to having several old kernel-source packages installed. These were all removed following the migration.
  • There were several hundred “orphaned” packages after the migration. These were removed with the command: urpme --auto-orphans.

My use of the proprietary nVidia driver was picked up and followed through to the new system because I enabled the ‘tainted’ repository (see Edit below) prior to migration. I did notice several old game packages being migrated that I have been running from source builds. So, I do not need those packages. These took up time and space and had to be removed following the migration. In hindsight, I should have gone through and removed unneeded packages before migration.

I did have to restart the migration with a specific mirror at the beginning. The mirror chosen for me by the command –

# urpmi.addmedia --distrib --mirrorlist http://mirrors.mageia.org/api/mageia.1.i586.list

– was a mirror that was across the Atlantic from me and very slow. So I instead used the command –

# urpmi.addmedia --distrib (mirror_url)

– to choose a faster mirror closer to me, where (mirror_url) is replaced with the HTTP address of the mirror I chose. In all, the migration went very smoothly following the directions given by the Mageia people.


Edit Mon Oct 10 21:28:08 CDT 2011: I discovered later this was actually due to using the ‘nonfree’ repository, although I did have ‘tainted’ enabled for the migration.

Open Source: Mandriva 2011 vs Mageia 1

Are you in the market for a new laptop, desktop or server PC with Linux installed? Please give us the opportunity to quote a preloaded Linux laptop, desktop or server system for you.

By the way, if you did not read my previous article, Open Source Horror Story – A Linux Recovery Tale, you do not know what you missed. Basically the article is about recovering from a failing hard drive after an attempted upgrade of Mandriva to the 2011 release. The article is written in the third person from a storyteller’s point of view. It has some good information in it for those of you who may find yourself in a similar situation. Go have a look, and make a comment if you wish. Okay, enough about that, on with the new article.

As of today I find myself in the position of deciding whether or not to stick with my previously preferred distribution, Mandriva Linux. This is a bittersweet realization for me. I found Mandrake Linux several years ago in the early 2000s, about the time they were working on coming out of bankruptcy. When I saw and understood the command-line urpm* package management tools for the first time I immediately “fell in love” with them. In my mind those tools were, and still are, one of the best package management implementations in all of Linux. At that point, Mandrake Linux became my distribution of choice. When Mandrake merged with Conectiva and reorganized to become Mandriva, I stuck with Mandriva. When Mandriva narrowly avoided another bankruptcy, I stuck with Mandriva. When Mandriva development seemed to be imploding and many developers left or were fired, I stuck with Mandriva. Now Mandriva 2011 is out, and Mandriva seems not to be “sticking with me”.

My preferred “desktop environments” for X on Linux are in this order: fluxbox, XFCE4, WindowMaker. Notice something? You got it! Those are all “light” window manager / desktop environments, a category that does not include KDE or Gnome. I have never been a fan of desktop environments that are more resource hungry than most of the applications I want to run. I am even less fond of the direction both projects, meaning KDE and Gnome, are taking with their current DE implementations. I stick with minimalist GUI implementations such as those mentioned in the first sentence of this paragraph. Now with the release of Mandriva 2011 I see this disturbing, to me, tidbit in the Mandriva Linux 2011 Release Notes:

Deprecation

GNOME, Xfce and other Desktop Environments (DE) and Window Managers (WM) are no longer included in the official Mandriva packages. Contribution packages from the Mandriva community are available for these desktop environments however. Starting from Mandriva Desktop 2011 only KDE Plasma Desktop is officially supported. If you need Mandriva with another DE or WM you can use unofficial packages or distributions prepared by community members (which are described below).

Wow. Does that suck or what? I have seen the new ROSA interface for KDE on Mandriva 2011. All I can say about it nicely is, “That is not for me.” The new community-driven Linux distribution called Mageia, which is based on Mandriva 2010.2, has my beloved urpm* tools and will still “officially” supply / support fluxbox, XFCE4 and WindowMaker. Not only that, but after having to do one fresh Mandriva 2011 install following a failing hard drive, I found I have a strong dislike for the new Mandriva GUI installer. I really prefer the older Mandriva installers that work like the one in Mageia 1:

Installing Mageia 1

Ogg Theora video; best viewed in Firefox.

Finally, Mandriva 2011 is to the point of switching from sysvinit to systemd for bootup. Yes, one can still run sysvinit with Mandriva 2011. But since sysvinit in Mandriva 2011 is deprecated I suspect it may become broken with subsequent updates. My suspicion may turn out to be wrong, but why should I take the chance? While I understand systemd on Linux is probably the future for us all, I am not yet ready to switch. Mageia 1 still uses sysvinit for bootup at this point with systemd possibly arriving with Mageia 2. This gives me a bit more “wiggle room” to learn about systemd before I take the plunge into using it on my systems.

Due to all of the above, but specifically the DE part, I am now seriously considering a move to Mageia. In fact, while writing this article I have convinced myself it is time. I am researching my needs in anticipation of switching to Mageia this very weekend in fact. By the time you read this article I may already be in the middle of a distribution switch or finished with same. Once I do switch and have a chance to become more familiar with Mageia I will begin writing about that distribution here on The ERACC Web Log.

Obviously, my choices here will not be the choices that others will make. Regardless, I am hopeful the information I give here may help someone else with his or her own decision about a distribution to choose.


Open Source Horror Story – A Linux Recovery Tale


Hi children! I know it is a bit early for scary tales. We usually get to those in October. But I have one for you that you just might want to hear now. So. Get your hot cocoa, your S’mores and your sleeping bag and come over here by the fire. I have a tale of chills and thrills to tell you young’uns. There now. Are you all snuggled in and ready for a scary tale? Good. Here goes …

It was late on an August evening. August 30th to be exact. A brave independent consultant and Linux administrator was finishing up a long, slow upgrade from Mandriva 2010.2 to Mandriva 2011 for a client. He had noticed the upgrade was taking an excessively long time, but as this was only his second upgrade of the new release of Mandriva, he chalked it up to the new release of Mandriva. Little did he suspect the slow upgrade was due to … due to … oh, I can hardly say it to you sweet, innocent young’uns. But to tell the tale properly I must say it … A FAILING HARD DRIVE! (Look! I have goose bumps!)

When he rebooted following the last stage of the upgrade, he saw a … a … a … KERNEL PANIC! The system could not find the root / boot partition. So, he booted a PartedMagic Live CD to access the drive and see what was wrong. But PartedMagic refused to mount the partitions too. When he checked with GParted he saw that the /home partition, which he knew to be an XFS file system, was being “reported” as a “damaged” EXT4 file system. This looked bad. Very bad. So, he ran GSmartControl and tested the drive. Oh no! The drive was giving errors by the megabyte! Oh the horror! The angst! The tearing out of the hair … Okay, so he’s 50ish and mostly bald on top with a ponytail. He really avoids pulling out what hair he has left. But you get the picture.

Okay, not to worry. He had sold the client a new, spare hard drive just the right size to replace the failing drive. He also “knew” the client had backups, because he had set up the backups for them and told them how to run them. Plus they had periodic automatic backups as well and had been told how to check that the backups were running and completing successfully. But when he checked for the most recent backup … it was in May! No one had been running the manual backups and the automated backups were returning error logs that NO ONE WAS READING! (Yeah, he should have run an “extra” backup himself, but time was pressing because he had a time limit from the client to get the upgrade done. The time limit left no time for a backup.)

Now things were starting to look grim. He knew that losing three months of financial data stored in QuickBooks in the XP Professional virtual machine on the /home partition of the client’s drive could be a disaster for this small business client. Thinking it over, he decided the only solution was to run xfs_repair on the /home partition. So he did. Lo and behold, it worked! Well, somewhat. There were hundreds of megabytes in lost+found but the user directories showed up and most of the files were there, including what appeared to be the XP Professional virtual machine directory named .VirtualBox in the user account that ran the VM. Unless you have been in this position, my children, you have no idea the sense of relief this brave Linux denizen felt. But it was a premature relief, as you shall see.
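
For anyone facing a similar situation, the repair step itself is short. A sketch only: the device name /dev/sda3 below is a stand-in for whatever partition holds /home on your system, and the file system must be unmounted (boot a live CD) before repairing:

```shell
# As root, from a live CD so /home is not in use:
umount /dev/sda3        # make sure the damaged XFS file system is not mounted
xfs_repair /dev/sda3    # repair; unlinked files are placed in lost+found
```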

He immediately shut down the system and installed the spare hard drive. Then our brave lad rebooted with the PartedMagic Live CD and ran GParted again to create a new partition layout. Then he ran Clonezilla to clone the recovered /home partition to the new drive. Keeping his fingers, toes, arms, legs and eyes crossed for luck. (Did I mention he is a contortionist? No? Well, he’s not. That sentence is just for “color”.) The clone completed successfully and our intrepid Linux fellow shut down the system, removed the naughty hard drive, and gave it proper rites before smashing it with a sledge hammer. (Yeah, you guessed it, more “color”.)

Then he reran the “upgrade”, which had now morphed into a fresh install of Mandriva 2011 on the new hard drive. It was 4:00 AM on August 31st at this point. He was now into his 14th hour of an “upgrade” that had been supposed to take less than six hours by prearranged agreement with the client. By 7:30 AM, when the client’s staff began arriving, he had the system “finished”. The printer was printing. The scanner was scanning. The VM was booting. The rooster was crowing … just checking to see if you are paying attention. All appeared well and the client was understanding about hardware failures happening. After going over backup procedures with the client, again, our weary Linux consultant headed home for a short nap before starting his new business day.

Later that day he received a call. Yes, children, it was the client. The QuickBooks data was showing nothing past April 2010. Since this was August 2011, that was a Very Bad Thing. So, our fine Linux fellow headed back to the client and the “problem” system as he was now calling it. Upon review he discovered the restored virtual disk was one that had been a backup made in April of 2010 prior to an upgrade of VirtualBox at the time. Where was the most recent virtual disk with the client’s data? Gone. Vanished. Eaten by an evil hard drive. But, a light appeared above our hero’s head! Due to having had some sleep and some caffeine, he remembered that QuickBooks had been reinstalled with a new release in late June of 2011. He Had A Backup Of The System On A USB Drive From That Day! Yes, it would still mean losing two months of data. But that was much more acceptable in the client’s view than losing a year and a half of financial data. Which would mean near certain doom for almost any small business.

So, our Linux protagonist retrieved the USB hard drive, attached it to the system and ran a restore to get the virtual machine back from June 2011. This worked successfully and the VM booted. A check of the VM showed the data from June was there and intact. Our nice Linux guy packed up his gear, went over backup procedures with the client, again. (See a trend here?) Then headed home for supper and a good night’s rest. The End …

Well, not yet. You see, losing data really irritates our Linux Paladin. His mind would not let go of the problem. He kept thinking there was something he missed. Something he could have done to get all the data back. Something … something … some* … Ah HA! He recalled that lost+found directory with the hundreds of megabytes in it! He quickly called the client and arranged to go on-site after hours on that 1st day of September 2011. He combed through the lost+found directory with the ‘find’ command searching for files around the correct size of our missing, most recent, virtual machine file. There was one hit, just one. But it was enough. He had found the latest copy of the virtual machine. After making a backup(!) he copied this file to the correct directory, set the virtual machine back up using this found file and all the financial data was recovered. Everyone rejoiced and there was much feasting. (Yep, “color”.) The Real End.
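
The size-based search technique is worth showing. The sketch below recreates it on a scratch directory with small files, since the real hunt ran over /home/lost+found looking for a multi-gigabyte virtual disk image; the inode-style file names and the size window here are illustrative only:

```shell
# lost+found entries are named after inode numbers, e.g. "#131072".
scratch=$(mktemp -d)
dd if=/dev/zero of="$scratch/#131072" bs=1024 count=64 2>/dev/null  # ~64 KB: our "lost" file
dd if=/dev/zero of="$scratch/#131073" bs=1024 count=4  2>/dev/null  # too small to match
# Keep only files inside the expected size window (32 KB to 128 KB here):
match=$(find "$scratch" -type f -size +32k -size -128k)
echo "candidate: $match"
rm -rf "$scratch"
```

On the real system the window would be set around the known size of the missing virtual disk file, and each candidate checked with the ‘file’ command before copying it back into place.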

What is the moral of our story young Tuxes? It is this: Never rely on someone else to do a backup. Backup, backup, backup, backup, backup for yourself. Then when you think you have enough backups, do another backup. You can be sure our Linux star has learned that lesson … again.


Linux Hardware Support Better Than Windows 7


I will start this off by adding, “… with the exception of some wireless chip sets and high end graphics cards.” to appease those of you who will act like Arnold Horshack (1, 2) if that is not mentioned. If there are other unsupported devices on Linux that are supported in Windows 7 feel free to scratch your itch and tell me in a comment.

The concept of better is a subjective idea. What is better to me is possibly, even probably, not better to someone else. In my case, and in the case of some of my clients, Linux hardware support is “better”. I do not buy cutting edge hardware and tend to keep systems and peripherals until they stop working and can no longer be repaired at a reasonable cost. When a new release of my favorite Linux distribution comes out I can be 100% certain that my hardware that works with my current release will still work with the new release. That is something I just take for granted. This is not so in the Microsoft camp.

For those people who hold on to working hardware through new Microsoft versions, their hardware may or may not be supported in a new release of a Microsoft OS. Take the example of a recent conversation I had with the manager at one of my client offices. I will call her “Mrs. B” here. Mrs. B is a Microsoft fanatic and will not even consider switching to Apple, much less Linux. When I mentioned switching to Linux for her office desktop during our conversation she laughingly said, “Gene, you know better than that.”, because we have had that discussion before. This came up in our recent conversation about her HP Photosmart 1115 printer.

Mrs. B recently had to purchase a new PC for her office use because her old Microsoft XP Professional based PC died. She bought a cheap, commodity PC with Windows 7 Home Premium installed from an on-line discount store. She did not check whether or not her existing peripherals were supported. Why should she? They worked before, so they should still work. Correct? Not so correct. You see, HP has, for whatever reason, decided to not make drivers for the Photosmart 1115 for Vista, much less Windows 7.

Mrs. B had asked me to see if I could help her get her printer working on Windows 7 because she could not find the driver CD. So, I went to www.hp.com and did a search for drivers for her. I already suspected that HP had not created drivers for that model, and I was correct. I informed Mrs. B and mentioned that the printer does have support under Apple OS X and Linux. So maybe we could switch her to Linux so she would not have to get rid of her still working printer just to buy one that has Windows 7 drivers. That is when I got her response above. So, Mrs. B will be buying a new printer and either throwing away or giving away the still functional Photosmart 1115 printer.

While at HP’s web site, just for curiosity’s sake, I looked at the list of unsupported products in Windows 7. That is quite a list. Then I took items from the list at random and checked to see if HP reports they are supported under Linux. Oddly, some of the items in that list do have Windows 7 drivers. It seems even HP is not sure which of their products are not supported. Some of the products are not supported under Linux according to the HP driver search for them. Those also only have drivers for Microsoft Windows 3.1, Microsoft Windows 95 and Microsoft Windows 2000. It is possible these very old models are “win-printer” types that are gutted of any stand-alone capability and require a driver to function at all. But the other models I looked up all had support under Linux listed, but no support under Windows Vista or Windows 7.

One problem here is that Microsoft drivers are so closely tied to the system kernel that a new release of the operating system breaks old drivers. Under Apple OS X and Linux this is not a problem because most drivers, including those for printing, are separate from and not tied to the kernel. On Linux any driver that does require a specific kernel can be, and usually will be, easily recompiled by a distribution’s maintainers and released along with the new kernel. If the driver works with DKMS, even better. Printing runs as a separate subsystem, usually using CUPS. So, if one’s printer worked with Fedora 9 it still works with Fedora 15 and will probably still be able to work with Fedora 25 or whatever Fedora releases may be called later. So, one’s beloved Photosmart 1115 printer can still be used under Linux while it cannot be used with Windows 7. In my book, that is better hardware support with Linux.
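
To see this separation for yourself, the standard CUPS tools make it easy to check printer support from any Linux release. A quick sketch, assuming the cups package is installed (output will vary from system to system):

```shell
lpstat -p                        # list configured printers and their current state
lpinfo -m | grep -i photosmart   # ask CUPS which driver files (PPDs) cover a model
```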

These days I will only purchase new peripherals for my SOHO that specifically state they have Linux support or are shown to be supported by open source drivers. If the package says “Linux” on it, I also try to take the time to send an e-mail to the manufacturer letting them know I chose their product because they took the effort to put on the packaging the fact they support my preferred OS. This is my small effort to keep these manufacturers interested in supporting Linux. Perhaps you can do the same.

Do you have your own “peripheral horror story” with a Microsoft OS? Feel free to post a comment about it.


Open Source: Multitasking with X and Linux


I am talking about human multitasking here, not operating system multitasking. Every time I have to work on repairing a Microsoft Windows based system for a client I feel as if I am restricted to typing with one finger while blindfolded. Okay, I admit that is hyperbole. But after using X with fluxbox and ten workspaces, as well as the Linux console (1, 2) with screen, for several years now, a Microsoft OS experience seems very restrictive to me. There are no multiple workspaces by default on a Microsoft desktop and adding anything to give a Microsoft system multiple workspaces would not be the Right Thing To Do when working on someone else’s PC system.

Another restriction is with multi-user on Microsoft. With my Linux desktop PC I have a user for work related tasks and a user for relaxation and gaming tasks. I can keep the work user logged in, switch to a Linux console using Ctrl Alt (F1-F6), log in the game user, start a second X GUI session with startx startxfce4 -- :1 and play a short game while “stuff” keeps running under the work user in the first X session. If I am playing a buggy 3D game that may crash X, I have no worries about my tasks in the other X session as they would be unaffected if a poorly designed 3D game took down the second session. I can do this “out of the box” on a typical Linux distribution installation. If you are from the limited Microsoft universe you have no concept to compare to this on a standard, out of the box, Microsoft desktop PC. Yes, you can switch users. But if you switch users as I do to play a game that subsequently crashes the Microsoft GUI called “Windows” it all crashes. Not just the session where the faulty program broke the GUI. It is also truly simple to switch between or among multiple sessions of X on a Linux PC. Just use Ctrl Alt F# to switch back and forth, where # is the virtual terminal number for an X session. For example, my first session is on virtual terminal 8 and my second session is on virtual terminal 9. To switch between them I use Ctrl Alt F9 and Ctrl Alt F8.
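
For those who want to try it, the whole second-session recipe is only a couple of steps. A sketch, assuming an XFCE4 session for the second user; your virtual terminal numbers may differ by distribution:

```shell
# 1. Press Ctrl+Alt+F2 to reach a text console and log in as the second user.
# 2. Start a second X server on display :1 (the first session owns :0):
startx /usr/bin/startxfce4 -- :1
# 3. Switch between the sessions with Ctrl+Alt+F8 and Ctrl+Alt+F9.
```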

I am a “computer guy” and am one of the people that other folk call when they have a computer problem. Several of my clients are set up with VNC software so I can connect to their systems over the internet. For the clients that are running a Linux server or desktop PC from my company, I can connect with SSH to provide support. Some of these clients have contracted with my company to handle their Microsoft updates each month. So, for a fee, I do the boring work of making sure their Microsoft updates apply each month and resolve any problems that may arise for them due to a Microsoft update failing or conflicting with installed third party software. I also check third party software to see if those too need updates, as there is no central repository for updating everything as there is for a typical Linux distribution. I use UltraVNC at the client sites, because it has a Win64 version, and Vinagre here at the SOHO. With Vinagre I can connect to multiple systems, using SSH and VNC, and be working on each of them simultaneously. My limitation is usually the “speed” of the internet connection at each end. Usually more than two client systems updating at the same location while trying to use VNC will cause remote sessions to time out and drop. But, I can easily connect to two, three or four PC systems at two, three or four different client sites simultaneously and work on their systems remotely at night without a noticeable problem on my end. Being able to do this all with “free” software both at the client and on my side means I can multiply my ability with multitasking to support my clients without breaking the bank for my small business.
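
For the curious, tunneling VNC over SSH by hand looks roughly like the sketch below; Vinagre performs the same steps through its GUI. The host name, user and ports here are illustrative only:

```shell
# Forward local port 5901 to the VNC server (port 5900) on the client's machine:
ssh -f -N -L 5901:localhost:5900 admin@client-site.example.com
# Then point any VNC viewer at the local end of the tunnel:
vncviewer localhost:5901
```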

My Linux “desktop” is fluxbox as I stated above. I also stated I have ten workspaces. In my experience workspaces allow more efficient human multitasking. I do not suffer GUI clutter as one does on a single workspace system such as Microsoft’s “Windows”. I keep several programs running on my system all the time. Each workspace is designated to a specific task. So I know if I need to create a Labor Sheet for a client and make an invoice to send them, that is workspace 8 where I have Writer and GnuCash running. If I want to chat with a friend on Instant Messaging that is workspace 5 where I have Kopete running. If I want to browse the WWW or write an article for blog.eracc.com, that is workspace 3 where Firefox and Opera are always running. Should I want to listen to music while I work that is workspace 6 which I designated for multimedia tasks, meaning music and videos. With these multiple workspaces I can keep programs running with their window showing. So, when I want to switch to a running program, I just switch to the workspace where the program is running. I do not have to hunt through a list of minimized programs on my task-bar, which saves me time and keeps me productive.

What does this mean to you if you are a Microsoft user? Well, until you can experience multitasking on a Unix or Linux PC this will mean little to you. I regularly get comments like, “Oh, that would just confuse me.”, or, “I see no need for that.” from my Microsoft using friends. But my Linux using friends will likely all nod sagely and smile when I talk about work efficiency and human multitasking on Linux. Until you have experienced it and used Linux enough to get familiar with this, you will not really understand how beneficial multitasking with Linux can be for you. But if you come from the Microsoft camp to the Linux city and do give multitasking a serious look, and I mean long enough to get familiar with it, you might just get a clue about what I mean here.

  1. Meaning the text mode interface for Linux where no GUI or window manager is running. (back)
  2. Where is the Microsoft console? You may say there is one, but not really. Not compared to what one has on Linux and Unix. (back)
