GNU/Linux Security: Linux House vs Microsoft House

This is the second article in my series about GNU/Linux security for the GNU/Linux-curious and the new GNU/Linux user. The first article is here: GNU/Linux Security: Ubuntu has been Cracked!

There are many attempts to explain the differences between GNU/Linux and Microsoft products when it comes to security. In this article I am going to make yet another attempt. I want to make this as simple as I can for the non-technical users out there, especially those who are using Microsoft products and cannot conceive of anything that is more secure by default. If you are a technogeek god then ignore the fact that the explanations here are very simple. If you, in your great geekness, want to expound further then feel free to post a comment.

At base the Microsoft products all go back to a core that is built on the MS-DOS concept of a single task, on a single computer, for a single user. There is little need to be concerned about security with such a design. This is a fine concept if one never attempts to use such a system for anything other than a single task, on a single computer, for a single user. But that is not what Microsoft has done. The Microsoft products simply kept that single user, single computer base technology and added on multi-tasking (running many programs at one time) and networking (connecting many computers together for sharing data, printers and so on). Later, multi-user capability (more than one user on a computer at the same time) was added on top of this single user, single tasking core. Granted, the multi-user capability is not really present in Microsoft desktop products, so we can ignore the fact that one may create multiple user accounts on a modern Microsoft based desktop system. I will call the Microsoft model a one-one-one model. (See comment #15 below from “paul”, he explains what I mean here better than I have myself.)

The problem with adding on these multi-tasking, networking and multi-user capabilities to the Microsoft one-one-one products is that there appears to originally have been no concern for securing these systems. The security concern only began once people began to see systems being cracked and exploited “in the wild”. However, there was a serious problem with securing these systems. To correctly raise the security bar for Microsoft systems “out of the box” the core of the operating system should have been redesigned from scratch. The backwards compatibility that has its roots in that single task, single user, single computer model would have to go away at some point. Apparently the high and mighty Muckity Mucks at Microsoft made an executive decision to not do that, ever. So, today we have Microsoft Windows 7 released and containing roots going back to that insecure one-one-one operating system design.

How is GNU/Linux different? A GNU/Linux desktop system is designed from the ground up along the Unix model of multiple tasks with multiple users among multiple computers on a network. I will call this a many-many-many design. As such the basic design also includes consideration for securing the operating system and data on same when many users may have access to the same system simultaneously. Therefore, when a GNU/Linux computer is taken out of the box for the first time it already has a higher security capability. This is because of the many-many-many design that included consideration for security from the beginning.

How does this apply in a real world scenario? Okay, because of the original flawed design decisions by Microsoft many third party software packages require that a user be running as a system administrator with full access rights to the computer, including to system files. So, by default when one pulls out a new computer with a Microsoft system installed the users are created as “administrator” users. This is a problem because now this administrator user can browse to an infected web page and see a pop-up with an “anti-virus” warning. Then our poor user will click the close button on the pop-up and become infested with “Antivirus 2010” or other fake anti-virus program that at minimum is irritating but may also have broader security implications by then installing other malware (Malicious Software) that can steal personal information. Because the user is an administrator with full access to the operating system’s files the malware that starts from the web page also has full administrator access and can install itself with impunity.

How can I blame Microsoft for these third party software packages and/or users being set up as administrators? Why not blame these third party software designers? Well, I do blame poorly written software that requires administrator access to work correctly. But I also blame Microsoft, because Microsoft made the poor decision to stay with their one-one-one design and just “improve” it. At first the only way for any software to work correctly with these “improvements” was to have administrator access. Over the years this has changed, but rewriting all software to these new, more secure specifications is a slow and expensive process for the software companies involved. Microsoft should have scrapped that one-one-one model and redesigned the core operating system from scratch. That redesign should have looked something like Unix … or like GNU/Linux.

The GNU/Linux many-many-many system, on the other hand, works just fine when a plain user who is not an administrator uses programs on it. So, no software run by the user can affect system files. Further, no software on GNU/Linux is designed to automatically allow software to run from a web browser or e-mail application without the user’s knowledge. No open source developers I know are silly enough to think having such “capabilities” is a good idea. So, when our dear user browses to an infected web site and sees a pop-up about an anti-virus infection she can safely close that pop-up without worrying that an infection will occur in the background that will take over her computer. It is very unlikely that a web based malware script written with GNU/Linux as the target could find a way to even infect the user’s home directory. Why? Well, software that is downloaded from a browser is not set as executable. So, even if a browser could be made to download a file without the user knowing it, the user would have to change the file permissions to make it executable. There are no .EXE, .COM, .BAT or other files on GNU/Linux that can be run just because of their file extension. A file has to be a compiled application or a script and be set as executable before it will run. This automatically makes it much more difficult to infect a GNU/Linux system behind the user’s back. The effort required is much greater than with Microsoft based systems, where the file extension alone makes the application or script runnable.
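
This is easy to verify yourself. Here is a minimal sketch (my own illustration, not from the article; the filename and script are made up) showing that a Windows-style file extension grants no ability to run on GNU/Linux:

```shell
# Illustration only: the filename and script body are invented for this demo.
cd "$(mktemp -d)"

# Create a shell script and give it a Windows-style .exe name.
printf '#!/bin/sh\necho "I am running now"\n' > totally_a_program.exe

# The .exe extension means nothing here: without the executable
# permission bit the kernel refuses to run the file at all.
./totally_a_program.exe 2>/dev/null || echo "Refused: no executable bit set"

# Only after the user deliberately sets the x bit does the script run.
chmod u+x totally_a_program.exe
./totally_a_program.exe
```

The refusal comes from the kernel’s permission check, not from the shell guessing about file types, so it applies no matter how the file arrived on the system.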

I created a script and uploaded it to my web site to demonstrate this. Here is what an “ls -l” file listing of that script looks like when first downloaded:

-rw-r--r-- 1 gene users 73 2009-10-23 22:28 a_script_for_you

See that “-rw-r--r--”? That means the owner of the file, the “gene” shown after the “1”, can read it and write to it but not execute it, “rw-”. The group, the “users” shown following “gene”, and everyone else, not shown but implied, can read but not write and not execute the script, “r--r--”. The dashes are placeholders for the bits that allow writing, “w”, and executing, “x”, of files. Now I will change the permissions on the script by hand and run it:

[gene@era4 ~]$ chmod 700 a_script_for_you
[gene@era4 ~]$ ./a_script_for_you
I can only run if you use the command 'chmod 700 ./a_script_for_you' or similar!

See? I had to explicitly intervene to make that script run. I would have to do the same if I downloaded a program from a web site. Browsers on GNU/Linux have no ability to make the script executable on my system without my knowledge. I have to be involved in the process, so I have to be convinced that making this program or script executable is a good idea. If this script comes from the “Joe’s Bar and Grill” web site and purports to be an upgrade for Firefox I am going to be very suspicious about making it able to run on my computer. So should you. Social engineering attacks, where the bad guys convince a user to do something stupid, can still occur with GNU/Linux. So beware and be informed about those. But automated attacks that get system level malware installed through the browser or through e-mail are effectively impossible on GNU/Linux.
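
For those who want to try the whole thing locally, here is a reconstruction of the demonstration. The article does not show the contents of a_script_for_you, so the script body below is my guess at something equivalent, not the original, and the `stat -c` flag assumes GNU coreutils:

```shell
# Reconstruction of the demo; the script body is a guess, not the original.
cd "$(mktemp -d)"
cat > a_script_for_you <<'EOF'
#!/bin/sh
echo "I can only run if you use the command 'chmod 700 ./a_script_for_you' or similar!"
EOF

ls -l a_script_for_you               # fresh files lack the x bit, like a download
./a_script_for_you 2>/dev/null || echo "Permission denied, as expected"

# chmod 700 is the octal form of rwx------: r=4, w=2, x=1, so the
# owner gets 4+2+1=7 and the group and others get nothing.
chmod 700 a_script_for_you
stat -c '%a %A' a_script_for_you     # prints: 700 -rwx------ (GNU stat)
./a_script_for_you
```

The octal digits explain the chmod 700 used above: 7 for the owner (read + write + execute), 0 for the group and 0 for everyone else.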

This brings me to my illustration of the Linux House versus the Microsoft House. The Linux House is built with bullet-proof windows that are closed and locked. There are thick steel bar grills over all the windows. The Linux House has thick concrete walls, roof and floors. The Linux House has thick solid steel, bunker doors that bolt at both sides, the top and the bottom. Any thief that wants to get in and steal your family heirlooms is going to have to have some serious means of breaking and entering, like a bazooka or a tank. Yet all the security of the Linux House is behind beautiful and functional facades and the typical resident can be blissfully unaware of it most of the time. On the other hand the Microsoft House is pretty much like your house you live in now. It is quite adequate for day to day living but it is no serious impediment to a thief that wants to get in and steal your jewelry. It has plain old Windows. The thief can pretty much just break those Windows and climb in at will. You see, plain old Windows are no real way to stop a thief.

Can Microsoft operating systems be secured? Yes, they can, up to a point. But the starting point for securing Microsoft operating systems is far lower than the starting point for GNU/Linux systems. The flawed original design that underlies all modern versions of Microsoft operating systems keeps them more susceptible to attack even when locked down as far as possible. Of course, in reality, the only truly secure computer is one that is never used, by anyone. But then again, no one is going to spend money on a computer that cannot be used.

Any of you serious security types that want to share more information about GNU/Linux and its security by design model or have better illustrations than mine, please leave a comment.


Notice: All comments here are approved by a moderator before they will show up. Depending on the time of day this can take several hours. Please be patient and only post comments once. Thank you.


Published by

Gene A.

Gene is a "Unix Guy", network technologist, system trouble-shooter and IT generalist with over 20 years experience in the SOHO and SMB markets. He is familiar with and conversant in eComStation (a.k.a. OS/2), DOS (PC, MS and Free), Unix, Linux and those GUI based systems from Microsoft. Gene is also a follower of Jesus (forgiven, not perfect), and this does inform his world view.

30 thoughts on “GNU/Linux Security: Linux House vs Microsoft House”

  1. I am reading reviews about Windows 7 today. Supposedly Windows 7 includes more of an attempt at user privilege separation. Something that has been in GNU/Linux from the beginning and in Unix for 40 years now. We will see if this helps at all to make Microsoft desktops more secure if Windows 7 is widely deployed. Of course once one compares GNU/Linux to Windows 7 the choice might be GNU/Linux after all. Time will tell.

  2. Ah, Windows 7 appears to be a bit of an attempt to close the door on the one-one-one design of legacy Microsoft operating systems. Does it really? Who knows? We can’t see the source code to check on that. 🙂

  3. Windows may be secure by adding software such as anti-virus programs.
    GNU/Linux is (more) secure by default, but adding software/features might harm the system.

    How can it be that a customer (say: my grandmother) buys software (Windows), but needs to add virus scanners because the software is not secure enough by default?! It’s as if I had bought a car, and then the dealer told me the seat belts were not included in the design.

  4. Just quote this epigram:

    In Windows everything is permitted except what is explicitly forbidden;

    In Linux/Unix everything is forbidden except what is explicitly permitted.

  5. Your one-one-one approach for Windows is flawed.
    The recent releases of Windows (7, Vista, XP and 2000) all have as ancestor Windows NT, which did not originate in MS-DOS. NT was built with multi-user, multi-tasking and networking in mind. However, Microsoft lowered the security standard from the original Windows NT to accommodate gamers. Otherwise, the NT line of Windows would have stayed in the office and server world and could not have become as widespread as XP did.

    Just to set the story straight.

  6. Through other reading, I’ve gotten the distinct impression that the Windows NT kernel is/was in fact very secure. Perhaps at the kernel level, it’s even more secure than the default Linux kernel, until you start adding features like SELinux and the like.

    But the real problem is the userspace added on top of that kernel. The default Linux userspace maintains the security of the kernel, whereas the default Windows userspace ignores it. It’s more of a software culture issue than a capability issue. If Windows had a different culture, it could be much more secure. As you have said, it would start by making users non-administrator, by default.

    I’d also like to see further improvement on Linux, not just with stuff like SELinux. For instance, I’d like to see internet-functionality like web and email sandboxed by default, and require additional user interaction to extract anything from the sandbox, much less make it executable. Let the user KNOW that this downloaded stuff is carefully confined for his own benefit, and be careful what is taken out of that confinement.

  7. The problem is, people who live in the Microsoft house often come face to face with a Linux house. And they aren’t quite sure what to do when their house turns around and demands proof of ownership from them to do something (gksu/sudo). Whereas a Linux dweller accepts this as being a necessary evil and well… better that than a tank sized hole in the side of the building for someone to drive through, right?

    Microsoft did, to their credit, attempt to implement their own challenge starting in Vista: UAC. Unfortunately, because Microsoft were so busy replumbing the bathroom and putting in a fresh looking living room, it didn’t quite work and slowly drove people up the wall as, like a poorly trained guard dog, it barked all damn day and night.

    The main problem stems, like a bad house, from the builders. Windows 2000 was going to be a total rewrite of NT (the bricks of the Windows house). It wasn’t. Nor, despite the builders assurances was XP. Vista didn’t deliver on their promise either and Windows 7 sure doesn’t. Proof being (when all three were supported) security updates for NT4, Win2K and XP… for the same fault. I’m sure that if all NT’s were supported we’d see a security update for something spanning NT4 to Windows 7; which if your builders were telling the truth couldn’t happen, totally different houses after all.

    As Gene (#6) points out, if you want to know how secure the Linux house is, or you can be bothered to find out, the blueprints are available. If you really want to modify the house you can. You only have your Microsoft builders’ assurances; they won’t show you the prints so you can check yourself. Which, as any good home buyer should know, if you can’t see the architects’ drawings… what are the builders trying to hide?

  8. Michael (comment #10), thanks for the comment.

    I have read in the past that XP at the kernel level is actually a merge of the NT kernel line with the DOS based kernel line. So, while NT/2000 were pure NT kernel the XP and later lines are not. Of course, once again with feeling, we cannot see the source code to verify that. 😉 So, we have to take the word of the builders as Sarah points out in comment #13 above.

    Edit: Ah! Found it. Here is a quote from one of Microsoft’s own documents:

    Windows XP, as the merge of the Windows NT platform with key elements of the
    Windows 9x experience, builds on the single world-wide source code of Windows 2000
    to provide even richer international functionality; a more comprehensive
    implementation of Unicode, support for more locales and languages and an improved
    user experience. These new features are outlined below.

    You can see the document (PDF) here: Unicode and the Next Release of Windows. What do they mean by “… key elements of the Windows 9x experience …”? That is so freakin’ vague. I guess some of our guys will check the source code … ooops. 🙂

  9. Your article has some truth to it, but it is also flawed. Since Windows NT 4, Microsoft is not using a kernel designed after the old DOS (Windows 95-ME) model. Although you do not explicitly state that the newer MS OSes are using the DOS model, it is implied.

    Microsoft developed the NT kernel for Networking and using the Network design of a multi-user system. Therefore, at the foundation, the Windows kernel is a multi user system OS. It is not based on the single user model and has not been really since Windows NT/2000. The NT actually stood for Network computer.

    That is where your article may be misleading. However, the truth or germ to your article deals with the Desktop portion of the Windows OS, or the GUI layer. It is here that MS has an old design that causes and will cause multiplicities of security holes and problem domains.

    With the advent of Windows 3.1, Microsoft wished to develop a Common Object layer that would allow applications to communicate with each other, using Objects that could be shared. This was commonly referred to as DDE (Dynamic Data Exchange) or OLE (Object Linking and Embedding). This was a model designed to allow programs written for Microsoft Windows to share information and objects among each other. These OLE or DDE based objects could communicate with each other effectively and efficiently, allowing programs the ability to interact and share objects and data.

    The results of the OLE and DDE work revealed a great way for programmers to work with and exchange data between disparate applications. OLE was an astounding success. My Word processor and Database application can communicate with each other rather uneventfully. MS continued to expand upon this model as they entered the 32 bit world, where it became known as COM and now COM+.

    COM was a hit with programmers and programs that needed to interact with each other. Wonder why MS programs can communicate with others, it is due to COM. Wonder why I, as a programmer, can use MS Office with my custom apps, or customize Office, Outlook, and Visio with my custom applications? It is due to COM.

    However, COM was meant only as a single user or desktop application process. When MS started working in the LAN world and later the WAN world, they needed a COM based model for application to application communication with the intricacies of their core event model (which COM uses). So they developed DCOM or Distributed COM.

    DCOM allowed MS to make programming extensions to services and applications easier to work with on LAN based systems. By allowing COM objects to be accessed remotely, MS allowed program to program communication based on their highly successful COM model. Most of the base COM APIs are part of the core of MS’ application programming model. ActiveX grew from Distributed COM.

    The benefit for MS and the MS developer is that it is rather easy to provide application to application communication and low level API processes through the COM API (the HAL layer also exposes COM based objects for device drivers, printers, etc). The downside is that the same access to the COM communication layers is also available to the hacker and those who wish to write malicious code. Since nearly every MS application uses COM and COM+, and the base kernel APIs are also exposed as COM wrappers, this single or Desktop based model is easily exploited, and this is why “script kiddies” are successful in hacking MS systems with simple JavaScripts. Once you know the Runtime Type Libraries for a COM object, if it belongs to the MS registry, you can use it. Some of them have protected access rights, but most do not. That is the hole that is the MS OS today: COM. (Yes there are others, but this is the primary culprit.)

    So why doesn’t MS just do away with COM and the COM model and create an OS that does not need COM? Why don’t they remove the COM based layers from the HAL and GUI layers (as well as hooks to the System Kernel Layer) and thus secure their OS?

    This is easy to answer and hard for MS to move away from. The reason is nearly every application that MS has is based on this model. Office, Outlook, Visio, Project, etc. Nearly all custom applications built in MS shops are also using COM whether they know it or not. If MS removes this model, all compatibility with applications using any part of this model, will no longer work. The MS OS becomes something else and that means firms would more than likely dump the MS OS and adopt something else. Why wouldn’t they, since none of the legacy apps they use today would work any longer.

    MS is a victim of its own success and marketing. They are struck with the COM and Event based model which propelled them to great heights in the beginning of the GUI interface days. They rode that success without much thought to the future. Gates had the correct vision of the computer in every home being used for common tasks. However neither he nor MS envisioned a network computer day, when all computers would be connected to a centralized network as the overwhelming majority are today through the WAN based Internet.

    Now, the Windows OS and the Windows applications are much too ubiquitous to allow a complete rewrite of the MS OS to properly fix the glaring issues that are at the core of the API model, which the Windows OS employs. Could MS fix it? Yes, but to do so means a complete rewrite of the application model.

    This is one reason MS embraced Java in the beginning and, when they could not control Java, created their own clone called DOT.NET. If and when NET becomes the norm for all application development, MS could replace the core COM based API model with something else, without breaking the application layer. Since it is the runtime that controls the communication with the OS, all that would have to be changed in a Java or NET based application is the runtime, to call and use the newer APIs. The other option is to support a Hypervisor environment within the OS to support the old API model, similar to what Apple did when they adopted the BSD based kernel for OS X. But that does not work as well as native OS calls and needs work to prove effective, not to mention the costs involved for such a development effort.

    However, I am willing to bet MS is secretly working with an MS BSD kernel and the hypervisor model today. Since DOT.NET did not take off nearly as well as MS had hoped, MS has to have a fall back plan.

    Although nearly every MS based shop is doing at least some of their development with NET (primarily browser based applications), the Desktop based apps are still primarily using native API access. This means that they are using COM and COM+. That means that the applications as well as the OS are still vulnerable and shall be until this model is changed or abandoned. It may be years before MS finally cuts the ties, or it may be that MS loses so much market share that biting the bullet is easier to swallow. Until that time, MS users should understand that they are truly vulnerable and do what they can to block access. If you are using Windows servers to provide external access in your organization, I really suggest you take a modern IT security course that is not sponsored by MS. You may truly learn something useful.

    One more thing that the MS shops should seriously consider, and that is to purchase and develop all new applications in an OS agnostic manner. That way, you are not locked into MS nor the vulnerabilities.

    Java is probably the more Enterprise level way of doing your new application development, but there is also Python, PHP, Ruby, and Qt based C and C++. If or when MS does change the OS, or if they continue to lose market share, you will be covered. Moving OS agnostic programs to Windows, Linux, Unix, or Mac is much easier than attempting to move an MS only based application to another platform. NET has its problems, so I do not suggest that NET be a real consideration unless you truly want to stay with MS only. Mono works, but does not guarantee 100% compatibility with MS NET. My experience is somewhere between 50 and 70%, which I find unacceptable.

  10. Thank you for your time and effort in writing and sharing this important information!

    John and Dagny Galt
    Atlas Shrugged, Owners Manual For The Universe!(tm)

  11. paul (comment #15) thank you!

    I really enjoyed reading your explanation of COM. COM/DCOM is a part of what I meant by the roots of modern Windows going back to the single user versions of the Microsoft system. I did not go into detail and research that to explain it as I am trying to keep these articles simple for the average user. I have a nebulous understanding of the COM/DCOM model myself so I would have had to do some in-depth research to explain it. Your explanation keeps me from having to do that. 🙂

  12. …That redesign should have looked something like Unix … or like GNU/Linux….

    Or VMS — you may recall that MS hired Dave Cutler away from DEC to design NT. Cutler was the architect of VMS, another industrial-strength multi-multi-multi operating system. And rumor has it that Bill Gates continually over-rode Cutler in the interests of backwards compatibility and his notion of convenience.


  13. Pingback: Anonymous
  14. So, if I understand the article and the above comments correctly, Microsoft is hoist with its own petard.

    Structural decisions it made to provide effortless, and consequently thoughtless, convenience as hallmarks of its user “XPerience” could not have more effectively constituted security vulnerabilities than if that had been their primary intent.

    Consider all the magic and mystery and wizards and such that typify the Microsoft experience. With an OS such as GNU/Linux, the user is expected, if not required, to be involved and knowledgeable…not in the extreme for most tasks, but at least conscious of working with something possessing power meriting some respect and responsibility.

    My assumption has always been that all the automation and “intellisense” and ActiveX, and COM/DCOM and RPC, and stuff going on certainly has the potential of bathing the user in a white light as if glory from heaven above, but almost by definition that consequent profoundly flat learning curve remains as such, failing to ever dip down with increasing user experience.

    It seems inevitable that the Microsoft user will ultimately be impeded by those design elements as they slowly gain insight, while the more challenged and engaged GNU/Linux users will become more and more efficient, and productive, as their learning curve turns into a ski slope before their very eyes.


  15. And the problem turns it self around….

    when all the windows people install linux and all the windows management now support linux and the java developers, develop on linux

    and have zero regard for linux or security…

    default installs become the norm…

  16. Here is how I have tried to use a safe user account on Windows XP and what it means.

    Besides the usual (necessary) account with administrative privileges, I have set up a user account with restricted privileges to do some jobs which require more safety, like connecting to the internet. As has been said above, some applications do not work in accounts with restricted privileges, so it is necessary to have both and switch frequently between them.

    If I switch to another account without logging off the current account, I get the famous blue (or black) screen of death. So this is another example illustrating how well Microsoft has implemented the many-many-many architecture.

    Whether it is in the Kernel, in the COM or any other layer, I do not care. I can only see the result: that it is impossible to work in Windows other than as an Administrator without experiencing serious annoyances.

Comments are closed.