Migrate With Confidence From Microsoft Windows Servers to UNIX/Linux

Strategic Information for IT Executives and Managers



A white paper by Jon C. LeBlanc
IT Manager
(Hewlett Packard Certified IT Professional,
Sun Microsystems Certified Solaris System Administrator)


Copyright © 2002 by Jon C. LeBlanc.
This material may be distributed only subject to the terms and conditions set forth in the Open Publication License, v1.0 or later (the latest version is presently available at http://www.opencontent.org/openpub/).
Distribution of substantively modified versions of this document is prohibited without the explicit permission of the copyright holder.


Originally published: March 29, 1999

Latest update: December 8, 2002


Web Location of this document: http://web.cuug.ab.ca/~leblancj/nt_to_unix.html

Spanish translation of this document, courtesy of TLDP-ES: (http://es.tldp.org/Manuales-LuCAS/conf-MigraNT2GNU/doc-migrar-nt-linux-html/)

Catalan translation of this document coming soon, courtesy of TLDP-ES: (http://es.tldp.org/Manuales-LuCAS/conf-MigraNT2GNU/doc-migrar-nt-linux-html/)

Danish translation of an earlier version of this document, courtesy of Anne Oergaard (http://www.sslug.dk/~anne/) and Kim Futtrup Petersen.




Executive Summary



    This paper and site are intended to provide today's leading Information Technology executives, managers, system administrators, and purchasers with clear, brief, dispassionate, and factual arguments for migrating some or all of their corporate computing resources away from the Microsoft Windows NT, 2000, XP, and .NET Server operating systems to an attractive, time-tested, ever-more-popular environment: UNIX/Linux.

    In the corporate community, the wrong choices can have devastating fiscal and productivity consequences at worst, and unsatisfactory results at best. Compelling reasons (efficiency, security, performance, software and licensing cost) exist for migrating systems away from the Microsoft Windows NT, 2000, and XP operating systems and for avoiding the purchase of Microsoft Windows .NET Server. Is this white paper applicable to your environment? If your major software applications run only on Windows operating systems and your organization is not well disposed to change from them, it likely is not. Nonetheless, this white paper will serve to illustrate future paths and alternatives. Additionally, a migration to Java-based services is recommended, but not discussed in this white paper.

    I have implemented Microsoft Windows NT and 2000 versions in global-scale computing environments, and I am in direct contact with corporate testers of Microsoft Windows XP and pre-release .NET Server versions. Concurrently, I have implemented and administered global-scale UNIX versions from Hewlett-Packard, Sun Microsystems, IBM, Compaq (pre-HP), and other UNIX vendors, as well as Linux distributions from Red Hat, Caldera, Mandrake, Corel, TurboLinux, and SUSE. I am evaluating Apple's OS X. As an experienced educator, IT manager, and multi-platform system administrator, I hope my credentials will reassure the reader.
    (http://web.cuug.ab.ca/~leblancj)




The UNIX/Linux Alternative to Microsoft Windows Servers



    Variants of the UNIX OS (Operating System) have been in development or production for over three decades, making it one of the most stable, capable, trustworthy, and constantly improving operating systems available today for high-end servers and supercomputers, while also remaining the solution of choice for high performance workstations. Leading vendor varieties of UNIX are Solaris from Sun Microsystems, AIX from IBM, HP-UX from Hewlett Packard, Tru64 from Compaq (pre-HP), IRIX from SGI, and SCO from the Santa Cruz Operation (now part of Caldera). These commercial OSes differ in their minute particulars, yet adhere to generic UNIX specifications. Other UNIX varieties are offered in a considerably less commercial vein, yet are fully featured and capable of front-line use in many instances: FreeBSD, NetBSD, OpenBSD, and others. When Apple concluded that their Macintosh OS had soldiered on well beyond its years, they turned to UNIX as the basis of their modern, greatly superior replacement product called OS X.

    Modeled closely on UNIX, and with over ten years of development of its own, Linux employs (and often enhances) UNIX's essential design features, concepts, standards, and performance. In this sense Linux is sometimes considered a UNIX clone, yet is probably more accurately described as "UNIX-like". Unless discussing specific differences between UNIX and Linux, one can in most cases be comfortable referring to them generically in similar terms. Leading Linux distributions (versions, or flavours) are Red Hat, Debian, Mandrake, Caldera, SUSE, TurboLinux, and Conectiva. All are routinely inter-operable with each other. In fact, the latter four vendors have committed to offering a standardized version amongst them called United Linux. Global-scale hardware vendor support of Linux is provided by IBM, HP, SGI, Dell, and (most recently) Sun, among others.

    UNIX was born and raised in the milieu of high performance, highly connected computing. UNIX and the C computer language were co-developed at Bell Labs in the 1970s, and TCP/IP networking was integrated so early and so deeply (notably in the BSD releases of the early 1980s) that the three are intrinsically inseparable within the OS. Originally called the "UNICS Time Sharing System" when first developed by Ken Thompson, Dennis Ritchie, and colleagues at Bell Labs, UNIX was designed from the ground up to be a multi-tasking, multi-user computing environment. Linux picks up directly on that basis. Both continue to have their already viable capabilities methodically expanded.

    Microsoft Windows NT, Windows 2000, and Windows XP invite comparison to UNIX/Linux, but only at the lower end workstation/small server level. In the early 1990s, Windows NT was rolled out by Microsoft as a low cost alternative to UNIX and other operating systems, combining VMS-derived design expertise with Microsoft's DOS/Windows heritage, and Microsoft certainly did nothing to stop a growing perception of it as a potential "UNIX Killer".

    Engineers and developers recruited from Digital Equipment Corporation (DEC) brought excellent VMS-based expertise into the Microsoft fold. Dave Cutler, considered the "father" of NT and perhaps the key developer of DEC's VMS OS, had been working at DEC on a new OS, code-named "Mica", meant to be a successor to VMS. DEC's highest management were troubled that Cutler was approaching the design from a hardware-neutral stance. Central to the Mica design was the "Hardware Abstraction Layer" (HAL), which offered a uniform software platform regardless of the underlying computer machinery. In retrospect, and somewhat ironically, HAL was a foreshadowing of Sun's Java platform, whose main goal is equal functioning across disparate hardware architectures and against which Microsoft has waged an ongoing crusade. Doubly ironically, in a feat of twisted logic, Microsoft has subsequently cloned Java into their own "C#" platform (a crippled Java clone running only on Microsoft Windows).

    DEC executives, worrying that their proprietary hardware-derived revenue would suffer if Mica were adapted by non-DEC hardware vendors, terminated the Mica project. Cutler, now adrift, was quickly integrated into Microsoft and began work on the Windows NT project in 1988. Amid suspicions of intellectual property theft, DEC eventually sued Microsoft, alleging that Cutler and his Mica team had actually continued the same project within Microsoft, culminating in the birth of the Windows NT OS. After Microsoft settled the case with DEC for $150 million, inside sources alleged that large quantities of NT's code (and even most of the programmers' comments) were identical to Mica's.

    Implicit in this tale is that no matter what the conditions were by which Microsoft had acquired this new project, they had the promise of a world-class OS in their hands. Had Microsoft's small team of NT developers outranked their marketers, corporate computing would likely be quite different today.

    The root of the problems with Windows NT and Microsoft's successor OSes in the corporate computing world is that NT grew out of a small team project (run by outsiders) within the bounds of Microsoft's only computing experience: a very limited, local, small scale desktop environment in which any connected computers were assumed to be completely trusted and the OS was capable of neither real time multi-tasking nor multi-user computing. While the NT team were ready to use their new OS to help Microsoft address those huge shortcomings in their product line, management insisted that the marketers' favourite features, no matter how technically unworthy, be grafted onto the NT OS. Under such pressure, the technical poisoning of the NT platform began.

    From the beginnings of NT, Microsoft's relentless, cyclical model of OS version replacement meant that even the best efforts to create a commercial-grade multi-user, multi-tasking OS were hindered by the sequential, marketing-driven loading of feature sets rather than by intrinsic improvements in security, scalability, and stability. Microsoft Windows 2000 and XP are directly built on Windows NT technology, as their bootup splash screens declare, and the fundamental development urges within Microsoft have not changed.

    Rewrites and redesigns of many of the most basic features of Microsoft OSes are mandated in the name of product differentiation, often stranding users of former versions with little or no continuing support before those versions could be brought by their manufacturer to a state of robustness, stability, and trustworthiness. The Microsoft marketing-driven product cycle took a new spin with Windows XP's late-2001 roll out, so quickly after the release of the Windows 2000 OS. To be clear, Microsoft's product differentiation does not often correlate with product improvement. Commercial "rush to market" concerns have raised serious doubts about the quality of many Microsoft products, as discussed later in regards to security issues.

    Microsoft Windows NT was full of promise at its release, as were Windows 2000 and Windows XP. Many of the most celebrated NT marketing promises continued to go unfulfilled even as it was updated with its sixth service pack in 1999, while Windows 2000 provided improvements but similar, additional, and alternative difficulties. XP's feature set, weak security stance, and generally poor multi-tasking and multi-user support strain the use of the word "improvement" in comparison to Microsoft's previous OS offerings.

    Network & Overall Environment Compatibility

    Regarding medium and high-end servers, major corporate users have traditionally relied on UNIX to support commercial grade applications from vendors such as Oracle, Sybase, SAP, Lotus, and others. Recently they have become increasingly comfortable with the Linux operating system on web servers (typically running the popular Open Source web server application called Apache), on lower end servers for small businesses and local environments, and in the data center. In fact, the IT industry has displayed a significant willingness to migrate from low end UNIX machines to Linux, owing to the ease of substitution and the significantly reduced cost of predominantly x86-based equipment compared to UNIX vendors' proprietary hardware. This transition is aided inestimably by the fact that both UNIX and Linux allow administrators to completely integrate capabilities and methodologies (based on universal, "open" technical standards and protocols) between and amongst these machines.

    Linux migration continues to fare quite well in the class of small-size servers and workstations often inhabited by Microsoft Windows NT, 2000 and XP. The Microsoft products are largely based on proprietary network protocols, file and data formats, and functionality. These typically thwart such vertical integration and compel IT leaders towards "lock in" to the closed, Microsoft-centric subset. Modifying a Windows Server or Workstation to comply with universal protocols can be difficult and costly.

    Experience of recent years shows that as personnel who have made their careers mostly or only in the Microsoft OS world are promoted within corporate IT management structures, they tend to see enterprise computing as an extension of Microsoft-based desktop computing, and thus tend to address the requirements of medium and high-end server environments from an insufficiently broad skill set and vantage point. Microsoft's corporate strategy of making administration of their OSes "easy" has undoubtedly opened the computing world to untold millions of people, but it has unfortunately also fostered a pretense that the administration of critical corporate computing environments is an "easy" pursuit.

    A resulting Microsoft-centric computing ecosystem unjustifiably brings about the elevation of Microsoft OSes to an artificially high level in corporate environments, given the capabilities (or lack thereof) of the Microsoft server products. As is human nature, such mindsets and cultures are difficult to sway, even in the face of unflattering comparison (of which an abundance now exists).

    The net result to a network or enterprise computing situation is an artificial stratification of OSes based on their capability (or lack thereof) to inter-operate as one environment. The Windows OSes have bred a separate, less-flexible class of administrators, employing its own proprietary administrative means and requiring specific training apart from that of the UNIX/Linux camp. In almost every case, the quantity of administrators required to oversee a Microsoft-centric computing environment is much higher than for an equivalent UNIX/Linux environment, for many reasons outlined in this paper.



    The protocols of the TCP/IP suite, created and developed on UNIX and administered by international standards bodies, are routinely reworked by Microsoft to thwart inter-operability with other operating systems. As an example, Microsoft Corporation appropriated and made extensions to an open, public network security protocol, Kerberos, that was developed at MIT and made available free of charge to the entire computing community. It is widely suspected by the designers of Kerberos and other experts that the extensions Microsoft introduced in their Windows 2000 implementation of Kerberos had no conceivable purpose other than to render competitors' products incompatible with Microsoft workstations, in order to compel firms to adopt Microsoft rather than UNIX/Linux servers. As described later, the Microsoft "Active Directory" product is central to that company's plans for competing with over-arching directory services environments such as the open-standard LDAP and Samba, or Novell's Netware, among others. Microsoft's fear in adhering completely to the open Kerberos standard is that outsiders could conceivably clone the centerpiece of their Active Directory offering, the "domain controller" server; the secret Kerberos extensions effectively thwart the sort of reverse engineering that would be required to do so.

    Additionally, given the recent surge in the use of free and open LDAP and Samba server environments for directory services to Windows clients, Microsoft has evidently not taken this threat to their Active Directory income idly: they've taken direct aim at LDAP itself with their "Outlook 2002" product, which has had the previously well working Outlook LDAP lookup interface redesigned in a way that causes great problems in information retrieval. Side-by-side tests of Outlook 2002 with previous Outlook versions, Netscape, and Mozilla LDAP clients show that the newest version now can take minutes to receive data that the others receive instantaneously. Researchers have identified that the newest interface has been poorly rewritten, and that attempts to force it to send proper LDAP queries do not work. Outlook 2002 is optimized to use Microsoft Active Directory, however.
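
    Administrators who wish to verify server-side LDAP behaviour independently of any desktop client can do so with the standard OpenLDAP command line tools. The following query is a minimal sketch; the host name and directory base shown are hypothetical examples only:

        # Query a directory server for matching people (host and
        # base DN are hypothetical):
        ldapsearch -x -h ldap.example.com -b "ou=people,dc=example,dc=com" \
            "(sn=Smith*)" cn mail telephoneNumber

    A well-behaved client issues essentially this one query; comparing such a search against the traffic emitted by Outlook 2002 can make the contrast described above plain.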

    Microsoft's chosen TCP/IP-based networking methodology, CIFS-SMB, is an inherently inefficient protocol that requires significantly more network traffic for a given job than the UNIX protocol known as NFS (which is itself not an elegant paradigm). CIFS-SMB is a "blabber mouth", sending large amounts of easily captured information across the network. As Microsoft knows that the use of CIFS-SMB must eventually be brought to a close, they offer another network-based file system methodology, DFS (Distributed File System), which is very reminiscent of NFS in its use of "mounting" of remote file systems. It is sufficiently different from established practice that Windows administrators have been very reluctant to embrace it (because it's not "easy"?) and Microsoft has done seemingly little to popularize it. It would be most difficult to find a UNIX/Linux administrator who does not use NFS, and although early versions of NFS had stability difficulties, these troubles all but disappeared when NFS transitioned to Version 3 over the past few years.
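
    For those unfamiliar with the NFS "mounting" paradigm mentioned above, the entire mechanism amounts to a one-line export on the server and a one-line mount on the client. This is a minimal sketch; the host names, paths, and network range are examples only:

        # On the NFS server, publish a directory (in /etc/exports):
        /export/projects  192.168.1.0/255.255.255.0(rw,sync)

        # On a Linux client, attach the remote file system locally:
        mount -t nfs fileserver:/export/projects /mnt/projects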

    Since both NFS and CIFS-SMB are today's standards for file system inter-operability between computers, it is important to note that UNIX/Linux servers can speedily and efficiently operate in both protocols, while Windows servers routinely underperform in comparison tests and are not capable of providing NFS services without additional software and licenses. By using the Open Source (free of charge) server application called Samba, a UNIX/Linux machine can be made to appear as an NT, 2000, or XP file server in the Microsoft clients' Network Neighborhood interfaces. Owing to the underlying UNIX/Linux operating system's excellent data input/output performance, a Samba server routinely outperforms its equivalent Microsoft-only server in speed and reliability. Indeed, benchmark test results published in PC Magazine showed that the latest Samba software surpassed the performance of Windows 2000 by about 100 percent.
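
    Configuring such a Samba file server is a matter of a few lines in the smb.conf file. The fragment below is a minimal sketch (the workgroup, server, share, and group names are hypothetical); once Samba is started, the share appears to Windows clients as an ordinary Microsoft file server:

        [global]
            workgroup = CORP
            netbios name = FILESRV
            security = user

        [projects]
            path = /export/projects
            read only = no
            valid users = @staff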

    Microsoft employs CIFS-SMB not only for file services but also for printing and central administration of computer naming and user/resource authentication between Microsoft servers and workstations in a logical environment called a "domain". Each domain requires a Primary Domain Controller (PDC), and for failover protection a Backup Domain Controller (BDC). Depending on the age and size of the Windows domain, a Primary and Secondary Windows Internet Naming Service (WINS) server may also be required (WINS is another Microsoft-only protocol not needed in any other computing environment).

    The latest releases of Samba allow a UNIX/Linux server to be a one-for-one replacement for a Windows PDC (indeed a three-for-one since it can also replace the WINS servers in the example above). For small IT organizations, the known durability of a UNIX/Linux Samba server for supporting the file and print needs of a Windows client pool is a very compelling alternative to the costly, less-stable Windows server option. For large scale organizations, leading UNIX platforms such as Sun Solaris and HP-UX run specially crafted Samba servers within their kernels (rather than as user-level applications) for SAN (Storage Area Network) and/or NAS (Network Attached Storage) support of large Windows client pools. A UNIX/Linux server can simultaneously support Windows clients via Samba while supporting UNIX/Linux clients via NFS, Macintosh clients via Netatalk (an Open Source alternative to Appletalk), and limited Novell Netware client support via MarsNWE (an Open Source Netware emulator, although Novell markets their own Linux-based Netware products that allow full functionality).
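
    To act as a domain controller and WINS server rather than a simple file server, a Samba installation of the 2.2 era needs only a handful of additional smb.conf directives. The sketch below shows the general idea; the values and paths are examples only:

        [global]
            # ...continued from the [global] section above
            security = user
            domain logons = yes
            domain master = yes
            preferred master = yes
            os level = 65
            wins support = yes

        [netlogon]
            path = /var/lib/samba/netlogon
            writable = no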

    Active Directory's reputation has been clouded by complaints of difficult administration and disappointing performance. Some leading-vendor hardware implementations running Windows 2000 with Active Directory have been found to be unable to support more than five levels beneath the directory root before unacceptable performance losses are encountered. Given that the intent of a typical directory services structure is to mimic or correspond to the actual corporate structure in place, five levels is clearly insufficient for most medium-sized businesses. By comparison, advanced LDAP implementations of the kind found on UNIX/Linux architecture can support scores of directory levels, while on supercomputer-class UNIX installations hundreds of levels of LDAP directory may be seen.

    For those IT organizations that need to maintain Microsoft-based directory services but would prefer the benefits of UNIX/Linux on the server end, the Samba team has announced that their upcoming Samba Version 3.0 will be fully Active Directory capable. It is highly unlikely that such a hardware/software combination would suffer the performance and security limitations of Microsoft's present offering. Of course it has been suggested that Microsoft client software will then be amended to thwart such usage, as was Outlook 2002 in regards to LDAP.

    Microsoft's .NET Framework and Office Future

    Windows XP is the vanguard OS for Microsoft's ".NET Framework" initiative, which ostensibly seeks to create an Internet-capable P2P (person to person) data and authentication sharing regime. Industry people have taken to labelling such offerings as "Web Services", but then cannot seem to precisely define what those are. Observers see .NET Framework as an attempt by Microsoft to privatize or commoditize the Internet. It has been commonly suggested that Microsoft's corporate goal, in the face of dwindling prospects of growth or sustainability in the traditional PC marketplace, is to reshape itself from an operating system and software application vendor into an Internet-based commercial data, messaging, application software, and authentication clearinghouse. The .NET Framework initiative seeks to leverage Microsoft's access to the large pool of Windows desktop users connected to the Internet, eventually giving Microsoft a "cut" of all financial transactions conducted over .NET services.

    The Windows XP OS was released for workstations, and the Windows .NET Server OS will be targeted at server-class machines. Both OSes contain embedded, often compulsory application usage of such .NET Framework features as "Passport" user authentication, which is Microsoft's attempt at a "single sign on" technology for the Internet, in which users will not have to remember a multitude of usernames and passwords nor have to enter their personal information into different Web sites. Personal information such as contact lists and event calendars would also be stored within the Passport system, meaning that this technology will have to depend on a potentially colossal storehouse of private data. Users will be implored at many opportunities to sign up for a Passport membership and (if Microsoft has its way) will find that access to many popular Internet features will not be possible without one. This external marketing influence should be seen as an undesirable security and privacy threat to a self-controlled computing environment. Sensitive, proprietary user information and corporate data could find its way outside of local network control.

    Microsoft's .NET Framework initiative may well fail to saturate the computing industry, since the Sun Microsystems-sponsored "Project Liberty" scheme, which enjoys much broader corporate support, competes directly against it. Even so, the amorphous suite of .NET Framework components is clearly being designed to operate primarily and optimally only on Microsoft OSes, so again Microsoft's plans would seem to argue against other OS environments. Multi-platform applicability is ostensibly touted, yet Microsoft's own technical publicity and software development releases counter such promises and presage customer "lock in" to their products. Implicit is that Microsoft desires complete control over the standards and technologies by which businesses conduct their computer information operations.

    To wit, Microsoft executive Steve Ballmer has described how the Microsoft Office software suite is destined to become the conduit that will connect end users with .NET server and network resources. Office is being rewritten to behave as a "relational XML store" in which Microsoft intends to bring together, in Ballmer's words, "...one storage system instead of doing HTML, the file system, the database system, the e-mail system, they all today have their own search to manage, their own query, their own programmability, etc., and that is part of the thing that stands between getting Office to be a front end for payroll processing or decision support or workflow applications."

    Such a repository of private user data as that being planned to support .NET would be a lucrative prize for crackers and industrial spies. The security and privacy implications of trusting elemental messaging, authentication, financial, and data operations to Internet-based, Microsoft-controlled systems like .NET, Passport, and others are most serious, as discussed in the Security section further on in this white paper. Indeed, a proof-of-concept .NET virus called W32.DONUT was successfully written in January of 2002, capable of readily infecting the Microsoft .NET Intermediate Language (MSIL) files that are the underpinning of inter-computer communication under .NET. The "real" viruses are presumably (or obviously?) to come.

    Platform and Software Inter-Operability

    Sensible IT leaders know that single-vendor environments are much less desirable than mixed shops. An argument promulgated by Microsoft marketers is that by staying with an all-Microsoft system, inter-operability and ease of use are guaranteed. This is a patent misconception. Competition improves the breed, yet Microsoft allows no such competition. By using secretive, internal software measures to close the door on the offerings of competitor vendors, Microsoft forces customers to use only their products, but they therefore cannot guarantee to customers that "best of breed" components are actually being provided.

    Additionally, Microsoft has taken unprecedented steps to glue their own server products together using their .NET Framework, such that even routine interchanges between Microsoft machines are tied into .NET's functionality. Troublingly, even if a customer were to find a competitive non-Microsoft product, the customer would be proscribed by the End User License Agreement (EULA) of the newest Service Packs for Windows 2000 and XP from using any benchmark tests as the premise for the switchover. The EULA reads: "You may not disclose the results of any benchmark test of the .NET Framework component of the OS Components to any third party without Microsoft's prior written approval." Since corporate purchasing decisions require substantive rationale or justification prior to outlay, it would not be possible for testers to publicize their "proof" of a competitor's superiority without Microsoft's consent. It is difficult to believe that any such permission would be forthcoming from Redmond.

    The cellular design of Microsoft software packages (owing to the fact that over the years Microsoft purchased almost all of its leading software applications from other groups) almost guarantees the requirement of additional staff to specialize in their administration. Interfaces and procedures differ widely, from the minor (different pull down menus) to the major (substantially different administration interfaces and required skill sets). For a realistic example, such applications as Exchange, IIS, and SQL Server would comprise an entire computing application environment running only on Microsoft NT, 2000, or .NET servers. No competitive OSes are allowed to support those specific applications. As discussed further below, security vulnerabilities of those Microsoft OSes could cause all to be jeopardized simultaneously, effectively incapacitating a corporation's entire computing system (in several recent occurrences this has already happened to Microsoft-only server environments).

    The UNIX/Linux approach holds that versions, hardware, and capability can be accurately matched within a computing environment to specific situations. It is common to find corporate systems that employ several versions of UNIX/Linux simultaneously. For a realistic example, an enterprise computing environment may consist of a Lotus Domino server running on HP-UX and an Oracle database running on Solaris, while beside them a web server runs Apache on Linux, an e-commerce server runs on IRIX, and the entire system is backed up using AIX. Since all of those OSes use standard UNIX/Linux commands and protocols, automation and scripting of tasks is easily performed on all of them, often remotely, with little or no adaptation of tools or retraining of staff (a sketch of such a routine follows below). While such a mixed environment may have its downside (the confusion of arranging support from so many vendors, etc.), the IT leader would find such a competitive environment to be a budgetary and security boon.
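
    The short portable shell script below polls every server in the mixed environment just described with a single loop. It is a sketch only: the host names are hypothetical, and SSH key-based logins are assumed to be in place.

        #!/bin/sh
        # Gather uptime and disk usage from a mixed pool of UNIX and
        # Linux servers (host names are hypothetical examples).
        for host in domino-hpux oracle-solaris web-linux shop-irix backup-aix
        do
            echo "=== $host ==="
            ssh "$host" 'uptime; df -k'
        done

    The identical script runs unmodified whether the target is HP-UX, Solaris, IRIX, AIX, or any Linux distribution.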

    Of server-class operating systems, UNIX/Linux is exceptionally capable of supporting competing operating system software and network clients. The options available from Microsoft for Windows NT, 2000, and XP to support UNIX/Linux and other OS clients are minimal in comparison. Although both UNIX/Linux and Windows servers can emulate Netware and Appletalk servers, the underlying OS makes a great difference to the speed and stability of the server, and UNIX/Linux prevails.

    Performance

    Side-by-side comparisons on similar hardware show that UNIX/Linux is more stable, requires less administration, and is faster at read/write operations than Microsoft Windows NT and Windows 2000, even though the latter sports the latest enhancements to Microsoft's NTFS disk file system. Routine web searches and industry word-of-mouth turn up no credible refutation of these statements. Major computer publications are forbidden by Microsoft advertising agreements and other licenses from offering such side-by-side tests. Indeed, only the most stilted, obtuse bench tests can be made to show a particular superiority of an NT or 2000 machine over a UNIX/Linux one of equal hardware specification.

    Real world, corporate evidence has shown conclusively that UNIX/Linux machines operate for months, if not years, without need for a reboot, and crashes are rare. This is not so with Windows NT, which is prone to disaster for no apparent reason even after having had all its Service Packs applied. Competent system administrators from both the Microsoft and non-Windows worlds recommend that Microsoft Windows NT be abandoned wherever and whenever possible. Evidence shows Windows 2000 to be quantifiably more stable than NT, but still not in the same league as UNIX/Linux. Windows 2000 also offers enhanced failure monitoring and better recovery tools than its predecessor. Microsoft promises that .NET Server will offer greater kernel stability. As for XP, in Bill Gates' amusing words: "The error-reporting features built into Office XP and Windows XP are giving us an enormous amount of feedback..."

    At great additional cost, the less reliable nature of Microsoft OSes can be addressed by "clustering" them for High Availability (HA), which introduces a level of complexity beyond most Windows-trained administrators. Microsoft Datacenter is a clustering product available only from certain Microsoft-certified vendors. Based on the applications being locally utilized, a custom Windows 2000 solution is created specifically for an individual site. Each implementation is therefore different, and creation of the system often involves consultations between the vendor, engineers from the hardware manufacturer(s), and Microsoft. The resulting solution is captured permanently onto CDs and transported to the customer site for installation. For IT leaders, a certain loss of control is inevitable since local administrators are not allowed to make any changes, but it could be argued that this is a logical demand since the Datacenter agreement often guarantees 99.999% uptime.

    Since no changes are allowed to a Microsoft Datacenter implementation once it has been deployed, such as Service Packs or Hotfixes, there are troubling deficiencies in dependence on a remote organization for such critical updates, as discussed below in regards to security. Further, the vendor requires 24-hour monitoring of the Datacenter system, so the local network must provide complete remote access to an outside organization, which introduces network security concerns.

    Clustering of UNIX/Linux machines for HA is routine, but there is an additional category from which Microsoft OSes are notably absent: High Performance (HPC) clustering. HA and HPC clustering capability is available from mainstream UNIX vendors at high cost, configured by specially trained individuals for mission-critical environments such as in scientific research, e-commerce, and finance. Over the past few years a viable, low cost HPC alternative has come forward, based on Linux. When clustered into HA or HPC configurations, Linux provides a comparatively low cost (but at high productivity yield) platform for such purposes as failover web serving with Apache or for ultra high speed scientific research using Open Source Beowulf software. Indeed, many academic, scientific, and military organizations (such as the U.S. based Sandia and Livermore labs) have implemented Linux-based Beowulf clusters. Some schools have discovered the benefit of supercomputing at bargain-basement prices by creating Linux-based Beowulf clusters out of previously used pools of common desktop PCs (of course augmented with special networking devices to suit).

    Realistically, most commercial software applications would not require the type of benefit wrought by a supercomputer, but it is very illustrative to see how easily UNIX/Linux can adapt to this and many other roles while Microsoft OSes simply cannot. It must be noted that as with Microsoft Datacenter, high end UNIX/Linux HA and HPC systems are generally handled by outside organizations with very limiting agreements for local administration, although all necessary documentation and software to create a Linux/Beowulf cluster is freely available from the Open Source community.

    In medium to lower level environments not requiring HA or HPC performance, Windows OSes are not as reliable as UNIX/Linux ones. The vast majority of failures in Windows OSes are caused by software problems. Since UNIX/Linux servers are far less prone to such software difficulties, the remaining reliability concern for them is therefore their underlying hardware. Historically, such failures are few, and indeed microscopic in number in comparison to Windows OS software crashes. While the Windows-based world addresses OS instability by clustering a few or many discrete hardware servers for failover protection, a single UNIX/Linux server with dual or multi-pathed hardware can usually replace a cluster of Windows servers with no appreciable worry of down time. Stories of large quantities of Windows servers being replaced by one or a few of the UNIX/Linux type are increasingly common in today's IT world.

    The greater "down" time on Windows machines can be expensive in monetary cost, but also in productivity. Purposeful changes to the configuration of an NT machine by a qualified administrator often require a complete reboot, even for alterations that in the UNIX/Linux world would be considered routine and trivial on a constantly running machine. Windows 2000 and XP have significantly reduced the quantity of required reboots from NT, yet reboots are required at a much greater frequency than on UNIX/Linux. Availability of applications on Windows OSes is therefore less robust than on UNIX/Linux. Rebooting a machine to correct problems, a common Microsoft system administration technique, is a counter-productive and costly strategy. IT leaders of a daring bent may find adoption of Windows servers exhilarating. Worse, they may have already been conditioned to believe that the "culture of the reboot" is normal and acceptable. After all, rebooting is "easy". UNIX/Linux is modular in nature, adapting to changing conditions with aplomb. Real time administration and error resolution is routine, and an adminstrator can usually isolate and/or "kill" and restart offending programs without affecting the OS itself, or other programs. UNIX/Linux machines just keep going, and going, and going...

    Potential purchasers of Windows .NET Server must be aware that it is guesswork as to whether an organization can trust the new version, especially where financial computing or e-commerce is concerned. Unlike UNIX/Linux, a significant body of evidence of its kernel stability cannot be found. Such data for UNIX is commonplace, and the progress of Linux kernel development and testing is completely visible to all interested parties.

    One of the favorite commands of the UNIX/Linux administrator is uptime, which displays the time period since the last reboot. Unlike typical periods of days for Windows NT or a week or two for Windows 2000 and XP, the usual UNIX/Linux uptime period is measured in months, if not years. Undoubtedly some Windows administrators get above average uptimes with diligent care and attention, but invariably those machines are not heavily tasked.
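
    A sample uptime invocation, with illustrative figures only, shows the kind of report that is commonplace on UNIX/Linux servers:

        $ uptime
          3:15pm  up 487 days,  4:02,  2 users,  load average: 0.08, 0.11, 0.09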

    Complicating durability matters on Windows servers is the need to reboot as part of routine anti-virus update activities, meaning that even if the systems had not become unstable they would still require down time. As described later, UNIX/Linux machines are not susceptible to such virus difficulties. For reasons of durability and long service periods without interruption, such tasks as e-commerce and critical data manipulation are best serviced by UNIX/Linux and not by Microsoft Windows servers.

    With UNIX/Linux's superior performance and multitasking capabilities, the quantity of machines can be reduced when a conversion away from Microsoft Windows servers is undertaken. To use an analogy, pulling a wagon is always better with one horse than with two hundred chickens. Fewer machines means greater efficiency, less electricity, and speedier integration of new duties. As described below, large corporate IT organizations are finding this to be absolutely true.

    Hardware Issues

    Linux and certain UNIX varieties (FreeBSD, NetBSD, OpenBSD, SCO, Solaris-x86, etc.) operate speedily and efficiently on Intel-type (x86) hardware previously determined by Microsoft to be "obsolete" for many of its NT 4.0 versions (Internet Information Server, Back Office Server, other "Enterprise" server editions) and all of its Windows 2000 (and later) versions, while providing equal or similar functionality to the Microsoft products. The longer duration of usable machine life afforded by UNIX/Linux OSes means that the capital costs of the hardware can be spread over greater time, while hardware costs of migrating away from NT and 2000 are minimal or nil. For years, shortcomings in Windows OS stability have often, unjustifiably, been blamed on the x86 hardware platform rather than on the true culprit: the Windows OS itself. As proven by the Linux and UNIX alternatives, no doubt to the delight of x86-based system manufacturers Intel and AMD, there's nothing wrong with those systems that a decent OS won't cure.

    As was the case with Windows 2000 and XP, prospective upgraders to Microsoft Windows .NET Server must be prepared for the cost of the new hardware demanded by that version. Since Microsoft's operating systems routinely under-perform when compared to UNIX/Linux on the same hardware, the technical strategy of Microsoft has been to achieve greater performance through faster hardware. While this has historically helped the hardware vendors, the sensible budgetary constraints of IT departments are clearly not part of Microsoft's calculus.

    Costly RAM and CPU upgrades were obligatory for Windows 2000 and XP, meaning that newer, faster machines replaced suitably operational machines already in use. Once again Microsoft demands of already stretched IT budgets that current inventories of machines providing acceptable (in Windows terms) performance be replaced, rendered useless by new Microsoft Windows versions.

    This "forced hardware obsolescence" means that many of today's most commonplace hardware interface cards (network, disk, video, audio controllers) are not supported by the newer Microsoft operating systems. Unless drivers are made available from the interface card vendors themselves, many previously purchased hardware items are not configurable on new Windows versions. Driver sets must be submitted to Microsoft for their "signing" (official acknowledgement that the driver does indeed operate satisfactorily) in order to be used by newer Microsoft OSes. Unfortunately this means that hardware vendors don't tend to revisit their previous designs to update them, opting to develop new products instead. There is absolutely nothing wrong with such a process, except that the resulting products tend to have increasingly Windows-dependant features that may or may not be suitable to UNIX/Linux or other OSes. By following a Microsoft server purchasing plan, IT leaders are in fact locking themselves into a very limited, ever changing environment for hardware options.

    Linux and x86-based UNIX versions have typically had less support for brand new hardware interfaces in comparison to Microsoft OSes, since manufacturers have almost always written driver sets for the dominant Microsoft operating systems before all others. However, UNIX/Linux offers performance and capabilities today, on present hardware, that Microsoft only attempts to achieve on the newer, more expensive equipment. Also, it should be noted that the x86-based hardware industry, as exemplified by such companies as Creative Labs, Adaptec, 3Com, and many others, has rapidly moved to provide Linux support for their newest products over the past few years. For most of these hardware companies, Linux support for past and present products is now a given. OEM manufacturers like IBM, HP, and Dell now give equal standing to hardware development for Linux as for Windows.

    Organizations operating Windows NT on Compaq (nee DEC) Alpha servers and workstations were startled to find that Compaq and Microsoft abruptly discontinued development of Microsoft Windows NT 4.0 on that platform. When displaying the "scalability" of Windows NT 4.0 over several CPUs, Microsoft always touted its Alpha version. There was no public debate or discourse on this issue. The decision was taken, and customers were locked out due to their "lock in". Thankfully Alpha owners found that a version of Linux for Alpha is fully functional on those same machines. Of course, the conversion to Compaq's own Tru64 UNIX was also an attractive option. Nevertheless, the dangers of hardware "lock in" were amply illustrated.

    Consequently, for the near future, Microsoft Windows OSes operate only on the Intel x86 platform. The Alpha hardware scenario was typical of the power Microsoft brings to bear upon IT organizations. Given the merger of HP and Compaq in early 2002, and the resultant narrowing of the x86 server hardware vendor field to basically three major entities (the others being Dell and IBM), IT leaders who wish to maintain their own discretion over software and hardware choices should steer clear of the "lock in" nature of a Microsoft-only environment.

    While Microsoft Windows NT, 2000, and XP therefore operate only on x86 (Intel & clone) architecture, Linux is available at no cost not only for x86 but also for Sparc, UltraSparc, PowerPC (including Apple's iMac), Alpha, PA-RISC, and several other hardware platforms. The look and feel of Linux running on such disparate architectures is uncannily similar regardless of hardware platform. For cross-platform organizations, Linux is a veritable boon.

    IBM has ported (adapted) Linux to its most powerful class of mainframe computers, allowing Linux to operate either side-by-side (scores of Linux OSes running simultaneously on the same physical machine) or virtually (the Linux OS running in a memory-based "virtual machine" supported by another OS). IBM has converted their line of "big iron" (the z series mainframes, i series workplace servers, and p series AIX-class servers) to operate 64-bit Red Hat and SUSE Linux.

    In late 2001 this scenario of Linux on the IBM mainframe was embraced by the Finnish telecommunications company Sonera, which provides high-speed Internet access for 500,000 private and 70,000 corporate subscribers. Using a single IBM mainframe, Sonera was able to replace 60 different UNIX and Windows NT servers. Key to this success was running 500 virtual Linux servers on the one mainframe, with software installed by Red Hat and SuSE. This example is not unique, as IBM has won over an ever increasing list of other big corporate players to Linux on the mainframe. In each case, most or all of their Windows servers were replaced or abandoned.

    IBM is not alone in their elevation of Linux to the top levels of corporate computing. The Intel and Hewlett Packard corporations made Linux one of the primary operating systems available on the Itanium 64-bit CPU chip. HP is touting its own special high-security version of Linux on that processor. Contributing their resources to the general Linux-Itanium effort were such organizations as Caldera, CERN, IBM, Red Hat, SGI, SuSE, TurboLinux and VA Linux Systems. As well, IBM, Hewlett Packard, and SGI offer software development packages for Linux on Itanium, and compatible versions of the GNU "GCC" compiler and "binutils" packages are being prepared. To that end, Intel, IBM, HP, Red Hat and SGI gathered at a summit meeting in June of 2001 to work on improving GCC for Itanium. It is most significant that, since Itanium is neither an extension of the 32-bit x86 platform nor based on the RISC technology typical of 1990s-era UNIX hardware, the new synthesis of expertise in the Itanium's EPIC processor architecture included the Open Source community so fundamentally.

    On the immediate horizon is AMD's upcoming 64-bit Opteron (nee Hammer) CPU, which differs from the Itanium's EPIC approach by making extensions to the existing 32-bit x86 platform. This will require a recoding of OS software specifically for that new platform - something at which Linux excels. Microsoft spokespersons indicate that their focus is on Itanium for 64-bit and that offering Windows OSes on Opteron is thus not as great a priority. As with Itanium, Linux will be one of the first OSes to operate on Opteron. Further, Sun Microsystems is apparently moving to offer either AMD or Intel CPUs in a new range of one and two processor small servers to be run on Sun's own upcoming Linux distribution.

    Since Linux has already been running in 64-bit versions for years, essentially all of its applications have been or will be made available on the Itanium and Opteron architectures fairly rapidly. Microsoft released its first ever 64-bit OS, an Itanium-based version of Windows 2000 called "Advanced Server Limited Edition" in July of 2002 to almost no fanfare, since fewer than five applications (their own products: Exchange, IIS, SQL Server, etc.) have been signed by Microsoft as compliant. Opteron-based versions are apparently being considered. With the litany of security issues those Microsoft applications have shown, this is not promising.

    Direct Administration

    In almost all instances, except at additional licensing and purchase price, it is not possible to telnet (connect from a remote machine through a platform-independent interface to perform local tasks) into an NT Server or Workstation. Such remote administration is routine in UNIX/Linux, meaning that such servers are often run headless (without a monitor, keyboard, mouse, etc.) and without a GUI (Graphical User Interface), so that maximum energy can be devoted to the tasks at hand. Windows 2000 and XP have ironically addressed the telnet issue (to some extent) just as the majority of UNIX/Linux administrators have embraced the vastly more secure SSH (Secure Shell).
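
    A typical remote administration session requires nothing more than the following sketch; the host name is hypothetical, and the service script path varies by distribution:

        # From any machine on the network, inspect a headless server
        # and restart one of its services over an encrypted channel:
        ssh admin@samba-server 'df -k; /etc/init.d/samba restart'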

    Troublingly, Microsoft has never addressed the issue that Windows Servers cannot be operated headless. Those OSes require direct human contact and must always have a monitor and keyboard attached to them in order to be administered. Microsoft's only solution is their additional-cost Windows Terminal Server software, requiring an additional machine as well. So, to arrive at somewhat UNIX/Linux-like functionality, extra hardware, software, and license(s) must be purchased.

    As well, Windows OSes must constantly devote energy and memory space to keeping their GUIs operating. Owing perhaps to their background as a marketing company concerned with end user response to their GUI, Microsoft seems to have gaily ensured in Windows NT, 2000, and most especially in XP that a colorful and musically attractive workplace is available.

    A UNIX/Linux server requires only a network or serial port connection and no GUI for its functionality. Microsoft servers can indeed be connected at additional purchase and license cost to non-Microsoft hardware devices that provide ostensibly headless operation, but the GUI problem cannot be circumvented.

    The actual GUI system administration interfaces have changed somewhat between Windows server versions, yet still cannot match the administrative automation and hands-off capabilities possible with UNIX/Linux command line usage and scripting, which are essential administration paradigms. On the UNIX/Linux command line, capable administrators have online manual pages available as an immediate resource for assistance (however cryptic they may seem), while in the Windows GUI-based world administrators are faced with immediate choices of options in each graphical interface, usually with little or no information available for making an informed decision. Help files are available, but often are not sufficiently robust for proper results. Some Windows administrators will opt (or have opted) for wrong or inadequate choices merely in order to proceed. In this manner, proof is given that Windows GUI administration is not "easier" than the UNIX/Linux command line, as some erringly believe.

    While the attractiveness of the graphical workspace to the workstation end user is of undoubted consideration to some, it is generally not a desired quality when performance calibration of a production server-class machine is undertaken. UNIX/Linux versions do indeed allow fully capable GUI environments, but a wise system administrator appreciates the freedom to eschew such performance-draining environments and opts for the powerful UNIX/Linux command line environment. Divorcing Windows NT from its GUI environment is a task beyond most Microsoft-trained individuals, and is impossible in any newer Windows version.

    Automation

    UNIX/Linux administrators have long benefited from the highly robust and capable programming environment found in the operating system itself. Shell scripting has afforded the administrator the ability to greatly automate the processes of the system and its software programs, while ensuring continuity of procedures, reliable disaster recovery, and ease of remote administration. Microsoft administrators have had to make do with batch files, which are comparatively primitive holdovers from Microsoft's earlier DOS days. While some authors have undoubtedly created very clever solutions in that environment, the DOS batch file is thoroughly outperformed by the UNIX/Linux shell script's capabilities. The highly proprietary, GUI-based administration methodologies of Windows servers mean that they are unable to share the most routine automation scripts and procedures in the way that all UNIX and Linux versions do. After all, how can mouse clicks in a GUI environment be scripted in an easy to read, easy to edit text file that is portable to a floppy diskette or an email? They cannot.
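
    As a small, concrete example of the kind of routine automation under discussion, the portable shell script below watches disk usage and alerts the administrator by email. It is a sketch only; the threshold and mail recipient are arbitrary examples:

        #!/bin/sh
        # Warn the administrator when any file system exceeds a
        # chosen usage threshold (90% here, as an example).
        THRESHOLD=90
        df -k | awk 'NR > 1 { sub(/%/, "", $5); print $5, $6 }' |
        while read pct mount
        do
            if [ "$pct" -gt "$THRESHOLD" ]
            then
                echo "$mount is ${pct}% full" | mail -s "Disk alert: $mount" root
            fi
        done

    Scheduled from cron, a dozen lines like these replace what would otherwise be a manual, GUI-bound inspection on each Windows server.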

    Windows XP is not POSIX compliant, meaning that it cannot achieve even basic similarity to the UNIX/Linux shell or command line environment. The POSIX subsystem that had been available in Windows NT and 2000 was removed from Windows XP and is now available only at additional licensing cost.

    Microsoft has embraced VBScript, a scripting offshoot of its own Visual Basic language, as a means to achieve somewhat similar capabilities to UNIX/Linux scripting. However, VBScript has proven to be a source of major security concerns, since its inherent security posture is "permissiveness", which is to say that most or all routines are considered "trusted" unless otherwise configured. This is a complete reversal of the UNIX/Linux security paradigm, in which the default posture is "denial" until reconfigured as required. The upshot of Microsoft VBScript's weak security model is the apparent ease with which malevolent code can damage Microsoft operating system resources, as seen in a litany of world-wide virus assaults.

    Almost any custom system administration routines developed for operating a UNIX machine can be readily run on a Linux machine with minor adaptation. These routines will not work on the Microsoft platforms. Thus, mixed operating system environments continue to require administrators to create a set of universal "scripts" and a second set of "Microsoft-only" DOS batch files and/or VBScript-based procedures. As an industry paradigm or in regards to its efficacy in this role, VBScript has not fit the bill, and adoption has been very low.

    For wise administrators of mixed IT shops, Open Source software has greatly helped to bridge the gap. Perhaps the most suitable and popular multi-platform system administration language is Perl, the "Practical Extraction and Report Language" devised by Larry Wall and offered at no cost. Perl is no newcomer; it has been available for many years in UNIX and Linux OSes, and has been made portable to the Windows platform. Indeed, Microsoft themselves contributed greatly to ActiveState's porting of Perl to Windows. It would be difficult to find a computer language that has been made as extensible, flexible, and secure as Perl. Inevitably the popularity of Perl seeped into the Windows world, and so a great deal of cross-platform administration has been made possible, based on the Open Source contributions of many people over the years. Compared to Perl, VBScript is seen as a poor option.
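
    As a taste of that portability, the Perl one-liner below performs an in-place, backed-up search and replace, and runs essentially unchanged on a UNIX, Linux, or (via ActiveState's ActivePerl, with quoting adjusted for the Windows shell) Windows machine. The file and server names are hypothetical examples:

        # Replace every occurrence of an old server name, keeping a
        # .bak copy of the original file:
        perl -i.bak -pe 's/old-server/new-server/g' hosts.conf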

    Security

    The great amount of non-Microsoft security software and factory security patches for Microsoft OSes offers witness to the fundamental weakness of Microsoft Windows OSes in comparison to UNIX/Linux OSes. Such third-party anti-virus and security remedies are simply not needed on UNIX/Linux. To be fair, all operating systems can be vulnerable to malicious exploits, but it is a matter of wide degrees between UNIX/Linux and the Microsoft operating systems.

    Windows NT and 2000 are vulnerable to over sixty-five thousand known computer viruses (when I first authored this white paper in 1999 that amount was about forty thousand) while the number which affect UNIX/Linux can be counted on one hand and can only take hold of a system if the root user is operating that machine directly via a root login (a preposterous step on an active server, so almost unheard of). XP is vulnerable to the exact same viruses as its Microsoft predecessors, indicating that Microsoft clearly refuses to "harden" their newer Windows operating systems.

    In 1999 alone, the Chernobyl, Melissa, and Worm viruses caused untold damage to Microsoft computers worldwide due to the security vulnerabilities of Microsoft software on Microsoft operating systems. Again in 2000, the "I Love You" virus tore through Microsoft machines, causing many millions of dollars of damage in lost data and productivity. As in 1999, UNIX and Linux machines were only indirectly affected: not because of any direct attack upon them from those viruses (there was none), but because UNIX/Linux machines are routinely used as the servers that transport the very viruses and worms that are so lethal to Microsoft products.

    The July 2001 onset of the "Code Red" and "SirCam" worm exploits, and the September 2001 assault of the "Nimda" worm (estimated by computer security experts to have cost companies in the United States alone about $2 billion), once again demonstrated the inferior security of the Microsoft Windows NT and 2000 operating systems, as well as of the Microsoft IIS server software package. Once again, UNIX and Linux prevailed with no direct negative effects. As linked below, the respected Gartner Group advocated the quick abandonment of Microsoft server products. Further, web server software maker Halcyonsoft has reported that IBM maintains corporate policies forbidding Microsoft IIS servers on its Internet-facing web sites; some were nevertheless erroneously installed, and those IIS-based sites were hacked and defaced multiple times in 2001. Microsoft themselves admitted that one of their own servers was infected by the Code Red worm three times during its first configuration on the Internet.

    Particularly blameworthy software from Microsoft includes its line of Exchange messaging servers (predominantly for email), its IIS web server, and its Outlook desktop email client; all three have been repeatedly cited for their security vulnerabilities, most especially the IIS product. UNIX/Linux alternatives to those Microsoft products include iPlanet and Lotus Notes for messaging and web serving, as well as such Open Source options as the venerable, constantly improving Sendmail and Qmail email server applications and the popular Apache web server.

    A comment circulated throughout the IT-related media (print, web, etc.) attempts to explain away the quantity of security exploits in Microsoft software as a direct result of its popularity. This theory holds that as a given piece of software grows in adoption and common usage, so does the amount of illicit programming aimed against it. A sober review of the security issues posted at such web sites as CERT displays no such pattern, so the argument should be dismissed. Perhaps the simplest counter-proof lies in the arena of web server software: the Open Source Apache web server holds a 3-to-1 lead in usage over Microsoft's IIS server, yet the rate and depth of predation upon the Microsoft product (due to its improper security stance) dwarf the relatively obscure Apache exploits, which have been quickly corrected in newer versions. Are there malicious programs that affect UNIX/Linux? Of course, but defending against them is routine, not difficult, and not costly.

    Regarding the above-mentioned Microsoft Datacenter high availability clustering system, local administrators have no recourse to security patches and Hotfixes in the event of a crisis, since Datacenter software is available only from the vendor and is not changeable by the customer. In the case of the Code Red and Nimda worms, local administrators found their hands tied, and these mission-critical installations remained vulnerable for days, especially since most Datacenter systems use the insecure Microsoft IIS web server (which Nimda specifically attacked) to provide Terminal Server capabilities. The result is that a very expensive, mission-critical Datacenter environment can sit idle for days while a solution is found. UNIX/Linux high availability environments are not susceptible to such attacks.

    Significant expense is incurred by Microsoft Windows workstation and server administrators in protecting their machines from attack and in disinfecting them afterward. Presumably Microsoft is concerned that the end-user "experience" would somehow be lessened by a proper security stance. This could be seen as a deliberate passing of the security buck to local IT groups, meaning higher costs in software and labor to support Microsoft's low-security products. In fact, as of late 2002 Microsoft has taken to publicly floating the idea of charging its customers for security upgrades, adding to an already high TCO.

    All a potential purchaser can do is either accept the security risk of using a Microsoft OS immediately upon its release, or wait for Microsoft's patches and Hotfixes to eventually elevate the OS to an acceptably secure level. As mentioned above, Microsoft has historically replaced its OSes before they could achieve such a footing. Additionally, IT leaders must not forget the further cost of non-Microsoft security software, such as that from Symantec, McAfee, and Sygate, among others, to be factored into budgets.

    Microsoft claims that Windows XP is its most secure operating system ever. On installation or first-time use, all XP computers must be registered as part of Microsoft's "Product Activation" requirement. Since the OS installed from CD is out of date before ever having been run (a problem faced by all commercial OSes), a network or modem connection is mandatory for the OS to be "properly" activated, using Microsoft's "Automatic Update" feature. In December 2001, shortly after its release, Microsoft Windows XP was found to exhibit what can objectively be described as a gaping security hole: the default activation of Microsoft's "Universal Plug 'n Play" technology, which allows distant computers to connect directly to the local machine at system level, bypassing any notion of authentication of the remote entity. Essentially, the unpatched machine is wide open to the most lethal of attacks.

    Given that XP computers must "phone home" before they can be properly configured (or patched), this hole leaves XP computers wide open to attack during that early phase of use, meaning that every new installation of XP endures a period of severe vulnerability that cannot be prevented. The problem is so intrinsic to the Microsoft Windows XP OS that only a redesign of XP will cure it. Microsoft executives have faced questioning by the FBI regarding this matter, and Windows XP has been elevated to the top of the United States government's National Infrastructure Protection Center list of most serious national security issues. By the middle of January 2002, it had become clear that the patches Microsoft released to address the Universal Plug 'n Play hole and other issues were causing serious troubles of their own: IT professionals reported that a barrage of Automatic Updates from Microsoft had rendered many key XP installations and peripheral devices unstable or unusable. To their credit, Microsoft is struggling to address security and performance issues with alacrity. To their discredit, Microsoft refuses to provide documentation, or even an indication of the contents of patches, to IT professionals, making the debugging and correction of Automatic Update troubles almost impossible.

    Microsoft has added a new wrinkle to the "phone home" nature of their Windows XP OS with the release of its Product Use Rights (PUR) volume license agreement (published on Microsoft's web site and often altered without fanfare or due customer notice). In the section on Windows XP Professional, the "Internet-Based Services Components" clause reads, in part: "You acknowledge and agree that Microsoft may automatically check the version of the Product and/or its components that you are utilizing and may provide upgrades or fixes to the Product that will be automatically downloaded to your Workstation Computer." Taken at its word, this clause gives Microsoft carte blanche to access Windows XP computers anywhere, at any time. The local IT organization, as "owner" of the machines in question, has thus had any option of preventing such access removed, and is placed in the extremely awkward position of having no control over the privacy and security of its XP computers. Implicit is that all XP computers must be reachable by Microsoft over the network at all times in order to receive Automatic Update transactions. Advice: before ever committing to XP, have your corporate legal personnel review whether disabling Automatic Update would void this Microsoft "agreement". History reminds us that voiding Microsoft user agreements can be a messy, unfortunate affair.

    On January 15, 2002, Microsoft head Bill Gates dispatched an extraordinary email titled "Trustworthy computing" to Microsoft staff worldwide, one that was also distributed copiously in press kits throughout the news media. In his missive, Gates implored his workers to elevate the notion of security to the "highest priority", becoming "more important" than any other part of the company's work. With unfortunate misdirection, Gates opined that this new security consciousness at Microsoft arose in the wake of the tragic terrorist attacks of September 11, 2001 in New York City. This is transparently crass and misleading, as criticism of Microsoft's poor security efforts dates back to the 1980s. Many in the IT industry read the Gates letter as an admission that security has never before had a high profile at Microsoft. The inevitable question is to what extent such a necessary mindset can be inculcated into the previously nonchalant ways of that organization. Further, this sea change in corporate focus must be accomplished quickly, in the face of the ambitious, ill-conceived .NET strategy. Expediency before efficacy will be disastrous.

    It is difficult to trust Microsoft's newfound security consciousness when even Microsoft does not use its own security products. SQL Labs, Microsoft's development group for SQL Server, has opted to forego Microsoft's ISA (Internet Security and Acceleration) Server, which Redmond's marketers say defends networks against worms and viruses such as Code Red and Nimda. NetScreen's 500-series security appliance was given the task instead, even as Microsoft's web site continues to give reasons why ISA is best.

    As mentioned previously, the default security stance of Microsoft products is almost always "permissiveness", a potentially disastrous posture in today's computing environment. The UNIX/Linux approach, with its "locked down" default security stance, means that exploits are rare.
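
    That locked-down stance also makes routine verification straightforward. As a minimal sketch of such an audit, the following Perl script flags any world-writable files beneath a directory tree; the default directory and the single permission test shown are illustrative assumptions, not a complete security policy.

        #!/usr/bin/perl
        # ww_audit.pl -- flag world-writable files under a directory tree.
        # A minimal sketch of a routine permissions audit.
        use strict;
        use warnings;
        use File::Find;

        my $dir = shift @ARGV || '/usr/local';    # illustrative default

        find(sub {
            return unless -f $_;                  # plain files only
            my $mode = (stat(_))[2] & 07777;      # permission bits only
            printf "%04o  %s\n", $mode, $File::Find::name
                if $mode & 0002;                  # "other write" bit set
        }, $dir);

    Run nightly from cron, such a check takes seconds and costs nothing, which is precisely the routine, inexpensive defense described above.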

    Scalability

    Regarding proprietary UNIX hardware platforms such as IBM's RS/6000, HP's 9000 series, SGI's MIPS-based machines, Sun's Ultra line, and the previously mentioned Compaq Alphas, it is clear that these corporations have highly tuned their 32-bit and 64-bit operating systems to take maximum advantage of the underlying multi-processor hardware. All of these systems are "scalable": hardware additions and network adjustments can significantly improve the abilities of the system.

    These mainstream UNIX OSes support multiple mainboards, so that a single system can bear from two to over a hundred processors, with technical breakthroughs regularly raising those numbers. "High Availability" installations boast nearly 100% reliability, with built-in "failover" protection to prevent downtime.

    Microsoft's 32-bit Windows NT and 2000 operating systems can operate on only a few specific x86-based multi-processor systems, which are only fractionally as capable as the mainstream UNIX-vendor hardware varieties. The Microsoft-based systems are easily taxed by even a small number of concurrent kernel processes, and their scalability, in comparison to UNIX, is minor. For these reasons, Microsoft Windows OSes are simply nowhere to be seen in the supercomputer and High Performance Computing classes of server. A small team of OS developers within Microsoft is rumoured to be working on a supercomputing version of Windows, but no evidence of such work has surfaced.

    In the semi-annual surveys of the top 500 fastest computers in the world, UNIX operating systems routinely power almost all of the entries. Tellingly, Linux configurations appear with increasing frequency, having reached the 44th position in one of the 1998 surveys and the 35th position in the most recent. No Microsoft operating system has ever appeared on the survey. This absence of Microsoft at the highest levels of computing is most informative.

    For many years, film animators and graphical renderers have known that only UNIX/Linux fits the bill for such computationally intense offerings as "Monsters, Inc." and "Toy Story", and Microsoft products are falling from use there. UNIX vendor SGI (Silicon Graphics) is renowned in the graphic arts community for the scalability of the "compute farms" used in movie animation rendering. Under the leadership of Rick Belluzzo, SGI expended a great deal of money and energy to offer a Microsoft Windows NT-based workstation for the high performance graphics work required by the CAD/CAM, television, and motion picture industries. After continual disappointment with instability and poor performance, the now financially crippled SGI abandoned Microsoft Windows NT. Belluzzo soon left SGI for an executive position at Microsoft in an unrelated field of responsibility, while SGI returned to its highly touted IRIX version of UNIX, also offering Linux with full support. As a result, movie animation houses have gone overwhelmingly to IRIX/Linux combinations and abandoned Microsoft Windows products.

    Moviegoers who have seen the animated Hollywood film "Shrek" or the blockbuster feature film "Titanic" may be surprised to learn that the animated sequences were created entirely on high performance UNIX/Linux-based graphical rendering stations. The computerized animation in the popular 2001-02 film "Lord of the Rings" was rendered on UNIX servers and workstations running IRIX, with Linux machines serving as the rendering compute farm.

    The world's largest financial and brokerage firms, such as Merrill Lynch, Morgan Stanley Group Inc., The Goldman Sachs Group Inc., Credit Suisse First Boston Corp., and ETrade Group Inc., deploy Linux systems for data analysis and high performance computing, as well as for traditional file and print serving and in trading applications.

    The 32-bit Linux operating system, in its current state, has a scalability range that surpasses that of the Microsoft Windows server OSes and overlaps with lower-range 32-bit UNIX installations. The 64-bit Linux operating system (for Compaq Alpha and Sun UltraSparc) continues to improve its scalability and challenge mainstream 64-bit UNIX versions, while the aforementioned 64-bit Itanium and Opteron versions of Linux are taking their place as equals of the UNIX versions on those platforms.

    Perceptions and "Mindshare"

    The perception of the Linux operating system is that it is a viable, powerful, markedly inexpensive option for extricating a computer system from Microsoft Windows NT, 2000, and XP. Further, it is an equally attractive and very compelling alternative to purchasing Windows .NET Server.

    Looking beyond the migration itself, future plans and purchases could be based on Linux; more importantly, the expertise accumulated by system administration staff during the Windows recovery phase could make a later transition to a fully featured major UNIX version appealing. Interestingly, such a transition will be aided by the fact that development versions of upcoming mainstream UNIX releases, such as Solaris 9 and HP-UX, show that the final release versions will offer complete Linux APIs (application support and compliance interfaces). Moving applications from smaller Linux machines to larger UNIX machines will then be nearly effortless, provided they are not specifically tuned to a particular hardware environment (as few are).

    As a bulwark against the typically massive and saturating Microsoft marketing campaigns, the web sites of the major UNIX vendors stand on their own for information about their specific UNIX and Linux offerings. Since Linux is Open Source software, its lack of central authority or "ownership" seems to be preventing Microsoft from effectively countering its rise in popularity and acceptance.

    Indeed, the word-of-mouth popularity of Linux is unprecedented in the computer world. Specifically in relation to Microsoft, a major element of the Linux phenomenon is that it has fostered a grassroots repudiation of "lock-in" to the Microsoft agenda, with the above-mentioned harsh effects upon IT budgets.

    The industry's largest computer software and hardware corporations now routinely offer 24-hour post-purchase support for Linux on their platforms, while Linux takes its place in computer education centers worldwide. An increasing number of academic institutions throughout the world use the Linux kernel for Computer Science degree study, guaranteeing that the next wave of software and hardware experts will be intimately familiar with Linux. Training vendors now routinely offer Linux familiarization and system administration courses. Of the major computer corporations, only Microsoft seems to be avoiding Linux.

    Software Issues and Compatibility

    Does Linux's non-centralized origin and "Open Source" development process make it any less credible or "safe" a choice than UNIX or Microsoft Windows NT, 2000, and XP? Clearly not, judging by the huge corporate embrace of Linux. The list of major computer corporations actively supporting Linux continues to grow steeply, and includes IBM, Oracle, Hitachi, Dell, SGI, Mitsubishi, Sybase, Sony, AOL/Time Warner, Novell, Hewlett Packard, Intel, Fujitsu, Sun Microsystems, Informix, Adaptec, NEC, and many more.

    Given this list of corporate luminaries, fears that major non-Microsoft software packages available for Windows NT and 2000 would not be available on Linux are proving mostly unwarranted as new product versions are released. The quantity of commercial and freeware software for Linux continues to grow. Similarly, organizations that abandoned UNIX varieties of software for NT will find the return to UNIX/Linux increasingly easy as a growing number of consultants offer migration services.

    An added benefit of Linux's Open Source community of developers is that significant improvements, patches, fixes, upgrades, and new versions become available for use (and individual modification) at a rate that makes Microsoft appear motionless. To discount or misunderstand the merits of such "peer review" of the Linux code base is to dispute the validity of academic inquiry and the scientific method as practiced for centuries. Public peer review of Linux and Open Source code is what guarantees its best qualities. Conversely, closed source code requires placing an inordinate trust in its vendor.

    Open Source code is freely available and can be readily inspected prior to installation. By compiling such code locally, the program is optimized for the local machine; this is not so with Microsoft's pre-packaged binary files, which are compiled remotely to generic PC specifications. The performance implications of this difference can be significant. And since inspecting the contents of a canned Microsoft binary is impossible, a system administrator must simply accept that what is inside is beneficial. It has been demonstrated that what lurks inside Microsoft binaries often is not.
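
    Inspection begins with knowing that the source in hand is the source the developers actually published. As a minimal sketch, the following Perl script compares a downloaded source archive against its published checksum before the archive is ever unpacked or built; the archive name and expected digest are placeholders supplied on the command line.

        #!/usr/bin/perl
        # verify_tarball.pl -- check a downloaded source archive against
        # its published checksum before unpacking or building it.
        use strict;
        use warnings;
        use Digest::MD5;

        my ($file, $expected) = @ARGV;
        die "Usage: $0 <archive> <expected_md5>\n"
            unless defined $file and defined $expected;

        open my $fh, '<', $file or die "Cannot open $file: $!\n";
        binmode $fh;                      # treat the archive as raw bytes
        my $digest = Digest::MD5->new->addfile($fh)->hexdigest;
        close $fh;

        if ($digest eq lc $expected) {
            print "OK: checksum matches; unpack and review at will\n";
        } else {
            print "WARNING: checksum mismatch; do not build this archive\n";
            exit 1;
        }

    Verification of this kind is the first step of local inspection: it confirms that the code about to be read and compiled is exactly what its authors released.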

    Microsoft-produced binary files are known to often contain "easter eggs", which are internal sequences of code that are not central to the purpose of the original file. The best known of these is the flight simulator found in Microsoft's Excel spreadsheet program. Such Microsoft-distributed software is known derisively in the computer industry as "bloatware". There is no justification for wasting computer energy on such non-essential software, however entertaining some end users may find it to be.

    Windows 2000's installation software is approximately five times as large as that of Windows NT 4. The code itself contains almost double the lines (and complexity) of the NT 4.0 version, which is itself overly complex for its designated tasks (see the GUI and "headless" problems above). As with previous OS versions, Microsoft continues its practice of not publishing critical APIs (the programming interfaces used by software makers) for Windows XP, effectively disadvantaging the offerings of any company but its own. In fact, Microsoft is tying so many additional applications into XP that the complexity of the code now defies comparison with NT. An old adage states that "the fewer the moving parts, the less there is to go wrong." Microsoft evidently feels that IT personnel have nothing to fear from XP's mushrooming code size and complexity.

    In contrast, Linux code is continually tuned and optimized by a public community of highly skilled developers from around the world, whose only agenda is to implement better functionality. There are no secrets to the code, no hidden APIs, no "easter eggs". In almost any comparable situation, Linux code is proportionally smaller than that of other OSes; thus, it is very fast, easy to optimize for local needs, and free of junk. As traditional UNIX vendors embrace Linux, they are helping the Open Source GCC compiler team ensure that locally created GCC binaries achieve efficiency on their hardware similar to that of the vendors' own proprietary compilers. As mentioned previously, the mainstream UNIX versions are highly tuned and calibrated by their manufacturers to suit the appropriate hardware.

    UNIX/Linux program setup files, often called "binaries", are markedly smaller than Microsoft ones, since they rely on previously installed operating system libraries. Libraries provide centralized information and code routines to all programs, allowing programmers to concentrate on the merits and performance of their own code while simply invoking tried-and-true library routines as needed.
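
    A small sketch makes this model concrete: the Perl program below does no low-level work of its own, but simply calls a routine from the system's already-installed POSIX library, exactly as described above. (The script is illustrative; uname(3) is just one of thousands of such shared library routines.)

        #!/usr/bin/perl
        # libcall.pl -- rely on the system's installed libraries rather
        # than shipping private copies: POSIX::uname() is a thin wrapper
        # over the C library's uname(3) routine present on every
        # UNIX/Linux machine.
        use strict;
        use warnings;
        use POSIX qw(uname);

        my ($sysname, $nodename, $release, $version, $machine) = uname();
        print "This is $nodename, running $sysname $release on $machine\n";

    Because the heavy lifting lives in one shared, system-wide library, the program itself stays small, and a library fix benefits every program at once.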

    The Windows NT, 2000, and XP operating systems provide sparse library support. Microsoft programs must therefore install a selection of ".DLL" (Dynamic Link Library) files containing the libraries they need. These .DLL installations often overwrite critical, previously installed versions of the same file. As a result, de-installing software from a Windows machine can cause the undesired deletion of critical files, accompanied by a loss of functionality in unrelated software, or even in the operating system itself. Sometimes a complete re-installation of the operating system is required.

    A common Windows de-installation message warns the administrator of potential problems with shared files, yet provides no detailed information on the danger at hand. Faced with this, a typical administrator will opt to leave the "conflicting" files on the machine rather than risk the consequences of their removal. Thus, the Windows file system accumulates more and more files that have no present use, taking up space that could be better used for data. Microsoft themselves have referred to their ongoing library problems as ".DLL Hell".

    Microsoft's solution to ".DLL Hell" in Windows 2000 and XP was to borrow a common UNIX/Linux strategy in which replaced files are held in storage so that a rollback to a previous configuration can be made if the newer addition causes problems. Given the less stable kernel of the Windows OSes, this strategy is proving not to be as dependable in practice as it is on UNIX/Linux. The effort by Microsoft in this area was nonetheless urgently required and is generally appreciated.

    The UNIX/Linux programming paradigm does not allow for such de-installation difficulties. Programs are purposely installed into disk or file system areas that are safely removed from the critical operating system binaries and libraries put in place at OS installation time. Programs only make library "calls" and do not typically install their own.

    Also, since UNIX/Linux versions keep their configuration files in the /etc directory and allow tightly controlled access to those files, frequent tuning, cleanup, and backup of those files is simple and easy to automate. What's more, these files are easily adapted to other similar machines, so configuration attributes are quite portable across an IT environment. The Windows OSes instead rely on a single binary file, the "registry", to define the majority of a machine's configuration attributes. A registry file is not transferable to another machine, and registry contents become notoriously messy after several software changes, with inefficiencies eventually abounding. Companies such as Symantec sell products to improve the efficiency and tidiness of registry files, which raises the question of why Microsoft sells such a maintenance-dependent system in the first place, one that requires additional non-Microsoft software and licenses for its best performance.
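
    As a minimal sketch of how easily such housekeeping is automated, the following Perl script copies a few common /etc configuration files into a date-stamped backup directory. The file list and the backup destination are illustrative assumptions; a real site would tailor both.

        #!/usr/bin/perl
        # etc_snapshot.pl -- copy selected /etc configuration files into
        # a date-stamped backup directory.
        use strict;
        use warnings;
        use File::Copy;
        use POSIX qw(strftime);

        # Illustrative choices: a real site would tailor this list and path.
        my @configs = qw(/etc/hosts /etc/fstab /etc/passwd /etc/resolv.conf);
        my $dest    = '/var/backups/etc-' . strftime('%Y%m%d', localtime);

        unless (-d $dest) {
            mkdir $dest or die "Cannot create $dest: $!\n";
        }

        for my $file (@configs) {
            next unless -r $file;               # skip absent or unreadable files
            (my $name = $file) =~ s{^/etc/}{};  # hosts, fstab, and so on
            copy($file, "$dest/$name") or warn "Copy of $file failed: $!\n";
        }

    Dropped into cron, a script of this kind gives every UNIX/Linux machine a rolling, human-readable configuration history, something no registry-based system offers as simply.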

    "Bang" for the Buck

    Mainstream UNIX versions, like Microsoft Windows NT, 2000, and especially XP, follow the practice of bundling only select software options with the initial operating system purchase. Yet any direct comparison between the Microsoft and UNIX OSes on this basis is superficial, because the quantity of basic capabilities available in any generic UNIX exceeds that of NT, 2000, and XP.

    While the quantity of installation software and lines of code surged from Windows NT to Windows 2000, the latter still lacked basic server functionality in comparison with UNIX/Linux. XP similarly fails to address many of these basic server functions without additional cost.

    Training costs for Microsoft-certified IT staff continue to be proportionally much higher than for UNIX/Linux. Typically an MCSE candidate will face as many as 12 courses before taking certification exams, while UNIX/Linux candidates can typically achieve major vendor certification after fewer than 4 courses, owing to the innate learnability and sensibility of UNIX/Linux. Some people suggest that UNIX/Linux is cryptic and difficult to learn; to the many who have easily learned it, especially in today's rapidly growing Linux user population, this ill-informed sentiment is amusing.

    In the area of bundled software (software available as part of the operating system installation), Linux is the absolute champion, no questions asked. For example, a full installation of Mandrake Linux 9.x can install over 1.5 gigabytes of world-class software free of charge. This software bundle ranges in functionality from advanced Internet servers (Apache Web Server, INN News Server, Sendmail, etc.) to print services, to remote network management, to database support, to fully featured office productivity suites, and more.

    As of late 2002, the computer industry media have caught on in a big way to an increasing number of independent studies showing that the TCO (Total Cost of Ownership) of Linux is not just lower than that of Microsoft Windows, but widely so. As Microsoft executives now clearly acknowledge, there is no better price point than free, and as IT leaders have seen in their own environments, there is no better administrator-to-computer ratio than with Linux. In this same TCO area, UNIX has taken a particular beating, owing to the inherently costly nature of proprietary hardware solutions. Taken in context with an organization's desire for uptime, capability, power, and flexibility, UNIX solutions are still popular and "worth it" in overall IT budgets, but the gap between them and Linux solutions is rapidly narrowing.


    On the Desktop

    Microsoft products, such as Windows 95, 98, NT, 2000, and XP, are entrenched as the most widely implemented workstation environment throughout the corporate computing world. A wholesale takeover of such desktop environments from the Microsoft products by any competitor is likely not imminent, despite great strides in the quality of such Open Source desktop products as KDE and Gnome for UNIX/Linux. Historically, the rise of Windows products at the expense of UNIX can be traced to the wide cost differential between low-tech Intel x86-based PCs (both IBM and clone) and proprietary, leading-edge UNIX workstations. Additionally, the Windows-exclusive availability of office application software such as Microsoft Office cemented the use of the PC-based desktop, and Microsoft routinely changed (and changes) the file formats of its applications to thwart compatibility with competitors' products, effectively locking IT organizations into a software and license upgrade cycle. For a variety of reasons well outside the scope of this white paper, Apple's Macintosh did not succeed in countering the PC desktop wave, with the notable exception of having conquered the graphic design and music industries.

    The venerable CDE (Common Desktop Environment) used by Solaris, HP-UX, AIX, SCO, and True64 was intended to provide a uniform user interface that would allow ease of application programming across those UNIX versions. Even after many years of use, however, this traditional desktop of the UNIX world has never been widely embraced in the IT field. Differentiation ensued among CDE vendors, with unfortunate consequences for organizations operating a mix of UNIX versions on their workstations. Further, as no UNIX-based alternative to Microsoft's Office products was able to surmount the file format difficulties and become truly competitive, the UNIX desktop was not typically considered an option against Microsoft Windows.

    While CDE has a rather limited future, almost all of its vendors have announced that their updated UNIX versions will offer one (or both) of the KDE and Gnome desktops, matching all leading Linux distributions. Since the KDE and Gnome products constantly improve, and most especially as customers are introduced to the stunning GUI environment of Apple's OS X, viable UNIX/Linux-based alternatives to the Microsoft Windows desktop are increasingly credible. KDE, in particular, evolves steadily, and its upcoming 3.1 version compares well with the OS X GUI.

    Beyond the Microsoft license and software fee implications for freedom of choice, Microsoft's greatest sore point in the IT desktop environment is the productivity loss accumulated through lost time and lost files caused by Windows OS instability. A reboot can be a costly affair, either at that particular moment or when pro-rated over time with all the others. KDE and Gnome, operating on Linux-based PC workstations, are fast, inexpensive, stable, and secure. UNIX workstations on proprietary hardware, such as Sun's SunBlade models or HP's B-, C-, and J-class models, offer increasingly competitive pricing against x86-based PCs running Microsoft operating systems, while providing superior OS stability; such machines excel at their traditional roles, usually design and CAD/CAM work. Apple's OS X machines are very stable and fast, while offering an eye-popping desktop environment.

    In the area of UNIX/Linux software applications competitive with Windows-based desktop products, Sun's free-of-charge and ever-improving StarOffice suite (based on the Open Source OpenOffice platform and built for UNIX/Linux and Windows OSes) is proving to be an acceptable package for organizations wishing to leave the Microsoft Office platform. Another product, Corel's WordPerfect Office Suite for Linux, offered functionality on a par with (or superior to) Microsoft's own Office suite, of which no Linux-compatible version exists or is foreseen. Unfortunately, Corel discontinued development of this competitive product at the time it received a financial bailout of approximately $135 million from Microsoft; since then, Corel has turned instead to producing products for Microsoft's .NET environment and Apple's OS X.

    Given the historic popularity of the Macintosh OS within the graphic arts and music communities, it is a safe bet that as the leading software applications of those industries are ported to Apple's OS X, these highly creative people will follow to the new UNIX-based platform.


    Clarifying Your Decision To Migrate

    Be alert to the fact that only Microsoft, among all major computer corporations, is avoiding a commitment to Linux. Indeed, its marketing teams are now engaged in attacking it.

    Weigh Microsoft's advertisements and information carefully, and bear in mind that Microsoft itself chooses UNIX to handle the huge Internet loads upon its largest Internet services.

    According to Netcraft, almost 95% of the servers at Microsoft's Hotmail run the Apache web server on FreeBSD UNIX, while MSN runs on Solaris and Microsoft LinkExchange runs entirely on Apache/FreeBSD. This is after Microsoft spent almost four years on the task of shifting Hotmail to Windows NT (then 2000), and fifteen months after the first load-balancing machines began to be shifted from FreeBSD; Microsoft themselves appear to understand that it is not worthwhile to run their very own operating systems or web server software. As the web site The Register has noted, more of Microsoft's FreeBSD servers just seem to pop up again and again. When confronted with public amusement at this evidence of their UNIX and Open Source software usage, Microsoft installed "interceptor" servers that prevent these non-Microsoft machines from being identified, as in the redirection of queries for LinkExchange servers to Microsoft's BCentral site. In February 2002, IT industry watchers were amused to find that a web site run by Microsoft and Unisys to tout the "benefits" of Windows OSes over UNIX was, in fact, being operated on a UNIX server. It should also be noted that such Windows "booster" sites as Windows eXchange and WinOSCentral, and most PC hardware sites, use the Open Source Apache web server on Linux or *BSD UNIX rather than Microsoft products.
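
    Readers can perform this sort of check themselves. The following Perl sketch asks a web site which server software it announces in its HTTP "Server" header, which is essentially how surveys such as Netcraft's identify server software; it assumes the common LWP::UserAgent module from CPAN, and the default URL is a placeholder. As the "interceptor" episode above shows, a site can of course mask or falsify this header.

        #!/usr/bin/perl
        # server_header.pl -- report the web server software a site
        # announces in its HTTP "Server" header.
        # Requires the LWP::UserAgent module from CPAN.
        use strict;
        use warnings;
        use LWP::UserAgent;

        my $url = shift @ARGV || 'http://www.example.com/';  # placeholder
        my $ua  = LWP::UserAgent->new(timeout => 15);

        my $response = $ua->head($url);       # HEAD request: headers only
        my $server   = $response->header('Server');

        printf "%s -> %s\n", $url,
            defined $server ? $server : '(no Server header disclosed)';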

    Perhaps a new Microsoft advertising slogan is in order: "At Microsoft, We Don't Use NT, 2000, or even XP. Why Should You?"



Summary and Conclusion



    It is in every IT leader's interest to install the most capable computer operating system at the least expense. With the Linux operating system available free of charge as a complete, almost one-for-one replacement for Microsoft Windows in most routine server tasks, an existing Microsoft-based system can be converted at only incidental cost, with great benefit in reduced TCO and license fees, and with no future obligations to the Microsoft license subscription service. Beyond monetary obligations, IT organizations can now navigate toward maximum freedom in achieving their goals, perhaps for the first time ever. Vendor-imposed limitations, expenses, and inadequacies can often be made a thing of the past by employing Open Source software.

    Now more than ever, in the heightened global security climate, rote acceptance and purchase of Microsoft products without proper, fair, and objective research into Open Source, third-party, and UNIX/Linux alternatives is inexcusable, and may even be grounds for concerns of misconduct and negligence, if not incompetence. It is an open, well-documented fact that Microsoft products are especially inadequate in the area of trusted computing environments.

    For those organizations that invested (usually heavily) in the Microsoft Windows NT and 2000 operating systems, it should be unacceptable to continue justifying the large financial outlays needed to correct the security and capability inferiorities of the Windows OSes and applications. Herein lies the essential, "real world" superiority of the Linux operating system: a wide variety of benefits, improvements, and new capabilities is available at little or no extra cost. Linux is absolutely free of charge, with no licensing fees for any of its features, now or in the future. As mentioned above, the TCO of Linux environments is vastly lower than that of Microsoft Windows.

    In situations where Linux may not be an ideal replacement, UNIX may fit the bill. For tasks such as high volume e-commerce or finance, there is no better solution than a migration to a mainstream UNIX multi-processor machine. The initial cost may seem high, but the benefits of a smaller system administration staff, fewer servers per assigned load, less electricity, and of course sheer processing capacity may well outweigh it. In many high-end areas, Linux is not yet a match for a high-end UNIX version, but that gap is being erased rapidly.

    As a critical mass of applications becomes available for Apple's OS X line of workstations, the IT world will have to sit up and take strong notice of this stellar OS and its polished, friendly, and powerful user environment. For the administrator, all essential UNIX command line and scripting capabilities are present or easily added, and the end-user experience is unmatched by Windows XP.

    Migration is change, and change is stressful to personnel if not managed well. Deft navigation of the available options is essential, for narrow viewpoints can only produce limited results. Given the availability of a wide range of proven, secure, and powerful UNIX/Linux computing capabilities, today's IT leaders who made their careers primarily or only in the Microsoft OS world must look past personally limiting viewpoints, eschew "desktop-centric" mindsets, and ensure that technical merit prevails in decision-making.

    Computer system administrators will be called upon to learn new OS techniques as part of the migration; this should not be problematic for capable staff. Downsizing of staff will almost certainly occur in the migration from Windows OSes to UNIX/Linux (indeed, this is one of the central benefits of the migration for an organization), so IT leaders will need to watch for the truly knowledgeable and talented administrators in their midst who can thrive in new environments. Training will go a long way toward smoothing the technical transition, while a healthy measure of good will is needed to suppress the notions of elitism sometimes felt by practitioners of the superior UNIX/Linux computing environment toward their Windows-only colleagues. Such interpersonal attitudes as "techno-elitism" are poison to an efficacious migration.

    Not mentioned so far in this white paper is the tired, old pro-Microsoft argument that by using Open Source products a user is left with no legal recourse should a flaw cause damage. To address this, one need only read the Microsoft EULA: "In no event shall Microsoft Corporation or its suppliers be liable for any damages whatsoever including direct, indirect, incidental, consequential, loss of business profits or special damages, even if Microsoft Corporation or its suppliers have been advised of the possibility of such damages." Also not covered in this white paper is the penchant of Microsoft executives for comparing Open Source software to cancer, Communism, intellectual property theft, and other such maladies.

    For the reasons outlined in this white paper and supported by a great deal of factual data readily accumulating on the World Wide Web and in corporate computing environments, it is safe, sound, and productive to migrate to UNIX/Linux and say "No Thank You" to Microsoft Windows NT, 2000, and XP.







The opinions expressed in this white paper, unless expressly attributed to another person or source, are my own and do not represent those of any organization to which I am affiliated.


My main web page: (http://web.cuug.ab.ca/~leblancj)

Email me: (leblancj@cuug.ab.ca) NOTE: I do not provide free advice or free computer consulting. I reserve the right to answer email inquiries at my leisure (of which I have almost none).


"UNIX" is a registered trademark of The Open Group.

"Linux" is a registered trademark of Linus Torvalds.

"Windows NT", "Windows 2000", and "Windows XP" are U.S. registered trademarks of Microsoft Corp.



