UNIX Administration Course Version 1.0
Copyright 1999 by Ian Mapleson BSc.
Detailed Notes for Day 1 (Part 1): Introduction to UNIX and its History
The UNIX operating system (OS) is widely used around the world, eg.
- The backbone of the Internet relies on UNIX-based systems and services, as
do the systems used by most Internet Service Providers (ISPs).
- Major aspects of everyday life are managed using UNIX-based systems, eg.
banks, booking systems, company databases, medical records, etc.
- Other 'behind the scenes' uses concern data-intensive tasks, eg. art,
design, industrial design, CAD and computer animation to real-time 3D
graphics, virtual reality, visual simulation & training, data
visualisation, database management, transaction processing, scientific
research, military applications, computational challenges, medical modeling,
entertainment and games, film/video special effects, live on-air broadcast
effects, space exploration, etc.
As an OS, UNIX is not often talked about in the media, perhaps
because there is no single large company such as Microsoft to which one can
point and say, "There's the company in charge of UNIX." Most public talk is
of Microsoft, Bill Gates, Intel, PCs and other more visible aspects of the
computing arena, partly because of the home-based presence of PCs and the rise
of the Internet in the public eye. This is ironic because OSs like MS-DOS,
Win3.1, Win95 and WinNT all draw many of their basic features from UNIX, though
they lack UNIX's sophistication and power, mainly because they lack so many key
features and a lengthy development history.
In reality, a great deal of the everyday computing world relies on UNIX-based
systems running on computers from a wide variety of vendors such as Compaq
(Digital Equipment Corporation, or DEC), Hewlett Packard (HP), International
Business Machines (IBM), Intel, SGI (was Silicon Graphics Inc., now just 'SGI'),
Siemens Nixdorf, Sun Microsystems (Sun), etc.
In recent years, many companies which previously relied on DOS or Windows
have begun to realise that UNIX is increasingly important to their business,
mainly because of what UNIX has to offer and why, eg. portability, security,
reliability, etc. As demands for handling data grow, and companies embrace new
methods of manipulating data (eg. data mining and visualisation), the need for
systems that can handle these problems forces companies to look at solutions
that are beyond the Wintel platform in performance, scalability and power.
Oil companies such as Texaco and Chevron are typical organisations
which already use UNIX systems extensively because of their data-intensive tasks
and a need for extreme reliability and scalability. As costs have come down,
along with changes in the types of available UNIX system (newer low-end designs,
eg. Ultra5, O2, etc.), small and medium-sized companies are looking towards UNIX
solutions to solve their problems. Even individuals now find that older 2nd-hand
UNIX systems have significant advantages over modern Wintel solutions, and many
companies/organisations have adopted this approach too.
This course serves as an introduction to UNIX, its history, features,
operation, use and services, applications, typical administration tasks, and
relevant related topics such as the Internet, security and the Law. SGI's
version of UNIX, called IRIX, is used as an example UNIX OS. The network of SGI
Indys and an SGI Challenge S server I admin is used as an example UNIX hardware
platform.
The course lasts three days, each day consisting of a one hour lecture
followed by a two hour practical session in the morning, and then a three hour
practical session in the afternoon; the only exceptions to this are Day 1 which
begins with a two hour lecture, and Day 3 which has a one hour afternoon lecture.
Detailed notes are provided for all areas covered in the lectures and
the practical sessions. With new topics introduced step-by-step, the practical
sessions enable first-hand familiarity with the topics covered in the lectures.
As one might expect of an OS which has a vast range of features, capabilities
and uses, it is not possible to cover everything about UNIX in three days,
especially the more advanced topics such as kernel tuning which most
administrators rarely have to deal with. Today, modern UNIX hardware and
software designs allow even very large systems with, for example, 64 processors
to be fully set up at the OS level in little more than an hour. Hence, the
course is based on the author's experience of what a typical UNIX user and
administrator (admin) has to deal with, rather than attempting to present a
highly compressed 'Grand Description of Everything' which simply isn't necessary
to enable an admin to perform real-world system administration on a daily basis.
For example, the precise nature and function of the Sendmail email system on
any flavour of UNIX is not immediately easy to understand; looking at the
various files and how Sendmail works can be confusing. However, in the author's
experience, due to the way UNIX is designed, even a default OS installation
without any further modification is sufficient to provide users with a fully
functional email service, a fact which shouldn't be of any great surprise
since email is a built-in aspect of any UNIX OS. Thus, the presence of email as
a fundamental feature of UNIX is explained, but configuring and customising
Sendmail is not.
History of UNIX
BTL = Bell Telephone Laboratories
GE = General Electric
WE = Western Electric
MIT = Massachusetts Institute of Technology
BSD = Berkeley Software Distribution
1957: BTL creates the BESYS OS for internal use.
1964: BTL needs a new OS, develops Multics with GE and MIT.
1969: UNICS project started at BTL; OS written using the B language.
1970: UNICS project well under way; anonymously renamed to UNIX.
1971: UNIX book published. 60 commands listed.
1972: C language completed (a rewritten form of B). Pipe concept invented.
1973: UNIX used on 16 sites. Kernel rewritten in C. UNIX spreads rapidly.
1974: Work spreads to Berkeley. BSD UNIX is born.
1975: UNIX licensed to universities for free.
1978: Two UNIX styles, though similar and related: System V and BSD.
1980s: Many companies launch their versions of UNIX, including Microsoft.
A push towards cross-platform standards: POSIX/X11/Motif
Independent organisations with cross-vendor membership
control future development and standards. IEEE included.
1990s: 64bit versions of UNIX released. Massively scalable systems.
Internet springs to life, based on UNIX technologies. Further
standardisation efforts (OpenGL, UNIX95, UNIX98).
UNIX is now 30 years old. It began life in 1969, growing out of the Multics
project run by BTL, GE and MIT, and was initially created and managed by Ken
Thompson and Dennis Ritchie at BTL. The goal was to develop an operating
system for a large
computer which could support hundreds of simultaneous users. The very early
phase actually started at BTL in 1957 when work began on what was to become
BESYS, an OS developed by BTL for their internal needs.
In 1964, BTL started on the third generation of their computing resources.
They needed a new operating system and so initiated the MULTICS (MULTiplexed
Information and Computing Service) project in late 1964, a combined research
programme between BTL, GE and MIT. Due to differing design goals between the
three groups, Bell pulled out of the project in 1969, leaving personnel in
Bell's Computing Science and Research Center with no usable computing
environment.
As a response to this move, Ken Thompson and Dennis Ritchie offered to design
a new OS for BTL, using a PDP-7 computer which was available at the time. Early
work was done in a language designed for writing compilers and systems
programming, called BCPL (Basic Combined Programming Language). BCPL was quickly
simplified and revised to produce a better language called B.
By the end of 1969 an early version of the OS was completed; as a pun on
the earlier Multics work, it was named UNICS (UNiplexed Information and
Computing Service) - an "emasculated Multics". UNICS included a primitive
kernel, an
editor, assembler, a simple shell command interpreter and basic command
utilities such as rm, cat and cp. In 1970, extra funding arose from BTL's
internal use of UNICS for patent processing; as a result, the researchers
obtained a DEC PDP-11/20 for further work (24K RAM). At that time, the OS used
12K, with the remaining 12K used for user programs and a RAM disk (file size
limit was 64K, disk size limit was 512K). BTL's Patent Department then took over
the project, providing funding for a newer machine, namely a PDP-11/45. By this
time, UNICS had been abbreviated to UNIX - nobody knows whose idea it was to
change the name (probably just phonetic convenience).
In 1971, a book on UNIX by Thompson and Ritchie described over 60 commands,
including:
- b (compile a B program)
- chdir (change working directory)
- chmod (change file access permissions)
- chown (change file ownership)
- cp (copy a file)
- ls (list directory contents)
- who (show who is on the system)
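Most of these commands survive essentially unchanged on any modern UNIX; a
minimal illustrative session (file and directory names invented for this
example) might look like:

```shell
# 'chdir' survives today as 'cd'; the rest keep their 1971 names.
mkdir /tmp/unixdemo        # make a scratch directory
cd /tmp/unixdemo           # change working directory
echo "hello" > notes.txt   # create a small file
cp notes.txt backup.txt    # copy a file
ls                         # list directory contents: backup.txt notes.txt
who                        # show who is on the system
```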
Even at this stage, fundamentally important aspects of UNIX were
already firmly in place as core features of the overall OS, eg. file ownership
and file access permissions. Today, other operating systems such as WindowsNT do
not have these features as a rigorously integrated aspect of the core OS design,
resulting in a plethora of overhead issues concerning security, file management,
user access control and administration. These features, which are very important
to modern computing environments, are either added as convoluted bolt-ons to
other OSs or are totally non-existent (NT does have a concept of file ownership,
but it isn't implemented very well; regrettably, much of the advice given by
people from VMS to Microsoft on how to implement such features was ignored).
In 1972, Ritchie rewrote B to create a new language called C.
Around this time, Thompson invented the 'pipe' - a standard mechanism for
allowing the output of one program or process to be used as the input for
another. This became the foundation of the future UNIX OS development
philosophy: write programs which do one thing and do it well; write programs
which can work together and cooperate using pipes; write programs which support
text streams because text is a 'universal interface'.
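That philosophy is still used in exactly this way today; a small (contrived)
chain of simple programs connected by pipes:

```shell
# Each program does one small job well; the pipe '|' feeds the text
# output of one program into the next program's input.
printf 'cherry\napple\nbanana\napple\n' |
  sort |     # order the lines alphabetically
  uniq |     # remove adjacent duplicate lines
  wc -l      # count what remains (prints 3)
```

None of the three tools knows anything about the others; the text stream is
the only interface between them.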
By 1973, UNIX had spread to sixteen sites, all within AT&T and WE. First
made public at a conference in October that year, within six months the number
of sites using UNIX had tripled. Following the publication of a paper on UNIX
in 'Communications of the ACM' in July 1974, requests for the OS escalated
rapidly. Crucially at this time, the fundamentals of C were complete and much
of UNIX's 11000 lines of code were rewritten in C - this was a major
breakthrough in operating systems design: it meant that the OS could be used on
virtually any computer platform since C was hardware independent.
In late 1974, Thompson went to the University of California at Berkeley to
teach for a year. Together with Bill Joy and Chuck Haley, he developed the
'Berkeley' version of UNIX (named BSD, for Berkeley Software Distribution), the
source code of which was widely distributed to students on campus and beyond,
ie. students at Berkeley and elsewhere also worked on improving the OS. BTL
incorporated useful improvements as they arose, including some work from a user
in the UK. By this time, the use and distribution of UNIX was out of BTL's
control, largely because of the work at Berkeley on BSD.
Developments to BSD UNIX added the vi editor, C-based shell interpreter, the
Sendmail email system, virtual memory, and support for TCP/IP networking
technologies (Transmission Control Protocol/Internet Protocol). Again, a service
as important as email was now a fundamental part of the OS, eg. the OS uses
email as a means of notifying the system administrator of system status,
problems, reports, etc. Any installation of UNIX for any platform automatically
includes email; by complete contrast, email is not a part of Windows3.1, Win95,
Win98 or WinNT - email for these OSs must be added separately (eg. Pegasus
Mail), sometimes causing problems which would not otherwise be present.
In 1975, a further revision of UNIX known as the Fifth Edition was released
and licensed to universities for free. After the release of the Seventh Edition
in 1978, the divergence of UNIX development along two separate but related paths
became clear: System V (BTL) and BSD (Berkeley). BTL and Sun combined to create
System V Release 4 (SVR4) which brought together System V with large parts of
BSD. For a while, SVR4 was the more rigidly controlled, commercial and properly
supported (compared to BSD on its own), though important work occurred in both
versions and both continued to be alike in many ways. Fearing Sun's possible
domination, many other vendors formed the Open Software Foundation (OSF) to
further work on BSD and other variants. Note that in 1979, a typical UNIX kernel
was still only 40K.
Because of a legal decree which prevented AT&T from selling the work of
BTL, AT&T allowed UNIX to be widely distributed via licensing schemes at
minimal or zero cost. The first genuine UNIX vendor, Interactive Systems
Corporation, started selling UNIX systems for automating office work. Meanwhile,
the work at AT&T (various internal design groups) was combined, then taken
over by WE, which became UNIX System Laboratories (now owned by Novell). Later
releases included System III and various releases of System V. Today, most
popular brands of UNIX are based either on SVR4, BSD, or a combination of both
(usually SVR4 with standard enhancements from BSD, which for example describes
SGI's IRIX version perfectly). As an aside, there never was a System I since WE
feared companies would assume a 'system 1' would be bug-ridden and so would wait
for a later release (or purchase BSD instead!).
It's worth noting the influence from the superb research effort at Xerox
Parc, which was working on networking technologies, electronic mail systems and
graphical user interfaces, including the proverbial 'mouse'. The Apple Mac arose
directly from the efforts of Xerox Parc which, incredibly and much against the
wishes of many Xerox Parc employees, gave free demonstrations to people such as
Steve Jobs (founder of Apple) and sold their ideas for next to nothing ($50000).
This was perhaps the biggest financial give-away in history.
One reason why so many different names for UNIX emerged over the years was
the practice of AT&T to license the UNIX software, but not the UNIX name
itself. The various flavours of UNIX may have different names (SunOS, Solaris,
Ultrix, AIX, Xenix, UnixWare, IRIX, Digital UNIX, HP-UX, OpenBSD, FreeBSD,
Linux, etc.) but in general the differences between them are minimal. Someone
who learns a particular vendor's version of UNIX (eg. Sun's Solaris) will easily
be able to adapt to a different version from another vendor (eg. DEC's Digital
UNIX). Most differences merely concern the names and/or locations of particular
files, as opposed to any core underlying aspect of the OS.
Further enhancements to UNIX included compilation management systems such as
make and Imake (allowing for a single source code release to be compiled on any
UNIX platform) and support for source code management (SCCS). Services such as
telnet for remote communication were also completed, along with ftp for file
transfer, and other useful functions.
In the early 1980s, Microsoft developed and released its version of UNIX
called Xenix (it's a shame this wasn't pushed into the business market instead
of DOS). The first 32bit version of UNIX was released at this time. UnixWare
(developed under Novell, now owned by SCO) is often used today by Intel for
publishing performance ratings for its x86-based processors. SGI started IRIX
in the early 1980s,
combining SVR4 with an advanced GUI. Sun's SunOS sprang to life in 1984, which
became widely used in educational institutions. NeXT-Step arrived in 1989 and
was hailed as a superb development platform; this was the platform used to
develop the game 'Doom', which was then ported to DOS for final release. 'Doom'
became one of the most successful and influential PC games of all time and was
largely responsible for the rapid demand for better hardware graphics systems
amongst home users in the early 1990s - not many people know that it was
originally designed on a UNIX system though. Similarly, much of the development
work for Quake was done using a 4-processor Digital Alpha system.
During the 1980s, developments in standardised graphical user interface
elements were introduced (X11 and Motif) along with other major additional
features, especially Sun's Networked File System (NFS) which allows multiple
file systems, from multiple UNIX machines from different vendors, to be
transparently shared and treated as a single file structure. Users see a single
coherent file system even though the reality may involve many different systems
in different physical locations.
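As a sketch of how this looks in practice (hostnames, paths and option syntax
here are invented for illustration; the exact export format differs between
UNIX flavours, so consult the relevant man pages such as exports and mount):

```shell
# On the server 'fileserv', the exports file might contain a line like:
#   /home   -access=indy1:indy2,rw
#
# On each client, the remote file system is mounted so that it appears
# as an ordinary local directory (normally done at boot time):
mount fileserv:/home /home

# Users on the client now see one coherent tree; 'ls /home' lists home
# directories that physically live on fileserv.
```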
By this stage, UNIX's key features had firmly established its place in the
computing world, eg. multi-tasking and multi-user (many independent processes
can run at once; many users can use a single system at the same time; a single
user can use many systems at the same time). However, in general, the user
interface to most UNIX variants was poor: mainly text based. Most vendors began
serious GUI development in the early 1980s, especially SGI which has
traditionally focused on visual-related markets.
From the point of view of a mature operating system, and certainly in the
interests of companies and users, there were significant moves in the 1980s and
early 1990s to introduce standards which would greatly simplify the
cross-platform use of UNIX. These changes, which continue today, include:
- The POSIX standard, begun in 1985 and released in 1990: a suite of
application programming interface standards which provide for the portability
of application source code relating to operating system services, managed by
the IEEE.
- X11 and Motif: GUI and windowing standards, managed by the X Consortium
- UNIX95, UNIX98: a set of standards and guidelines to help make the various
UNIX flavours more coherent and cross-platform.
- OpenGL: a 3D graphics programming standard originally developed by SGI as
GL (Graphics Library), then IrisGL, eventually released as an open standard by
SGI as OpenGL and rapidly adopted by all other vendors.
- Journaled file systems such as SGI's XFS which allow the creation,
management and use of very large file systems, eg. multiple terabytes in size,
with file sizes from a single byte to millions of terabytes, plus support for
real-time and predictable response. Note: Linux does not yet use a journaled
file system.
- Interoperability standards so that UNIX systems can seamlessly operate
with non-UNIX systems such as DOS PCs, WindowsNT, etc.
- AT&T and Sun formed UNIX International (UI), which competed for a
while with OSF; X/Open later merged with OSF to form The Open Group. The US
Federal Government adopted POSIX (essentially a standardised interface to
UNIX), requiring all government contracts to conform to the POSIX standard -
this freed the US government from being tied to vendor-specific systems, but
also gave UNIX a major boost in popularity as users benefited from the
industry's rapid adoption of accepted standards.
- X11 and Motif:
- Programming directly using low-level X11/Motif libraries can be
non-trivial. As a result, higher level programming interfaces were developed
in later years, eg. the ViewKit library suite for SGI systems. Just as 'Open
Inventor' is a higher-level 3D graphics API to OpenGL, ViewKit allows one to
focus on developing the application and solving the client's problem, rather
than having to wade through numerous low-level details. Even higher-level
GUI-based toolkits exist for rapid application development, eg. SGI's
RapidApp.
- UNIX95, UNIX98:
- Most modern UNIX variants comply with these standards, though Linux is a
typical exception (it is POSIX-compliant, but does not adhere to other
standards). There are several UNIX variants available for PCs, excluding
Alpha-based systems which can also use NT (MIPS CPUs could once be used with
NT as well, but Microsoft dropped NT support for MIPS due to competition fears
from Intel, whose CPUs were not as fast at the time):
- Linux: open architecture, free, global development, insecure.
- OpenBSD: more rigidly controlled, much more secure.
- FreeBSD: somewhere in between the above two.
- UnixWare: more advanced; scalable; not free.
There are also commercial versions of Linux which have additional features
and services, eg. Red Hat Linux and Caldera Linux. Note that many vendors
today are working to enable the various UNIX variants to be used with Intel's
CPUs - this is needed by Intel in order to decrease its dependence on the
various Microsoft OS products.
- Apple was the last company to adopt OpenGL. In the 1990s, Microsoft
attempted to force its own standards into the marketplace (Direct3D and
DirectX) but this move was doomed to failure due to the superior design of
OpenGL and its ease of use, eg. games designers such as John Carmack (Doom,
Quake, etc.) decided OpenGL was the much better choice for games development.
Compared to Direct3D/DirectX, OpenGL is far superior for seriously complex
problems such as visual simulation, military/industrial applications, image
processing, GIS, numerical simulation and medical imaging.
In a move to unify the marketplace, SGI and Microsoft signed a deal in the
late 1990s to merge DirectX and Direct3D into OpenGL - the project, called
Fahrenheit, will eventually lead to a single unified graphics programming
interface for all platforms from all vendors, from the lowest PC to the
fastest SGI/Cray supercomputer available with thousands of processors. To a
large degree, Direct3D will simply either be phased out in favour of OpenGL's
methods, or focused entirely on consumer-level applications, though OpenGL
will dominate in the final product for the entertainment market.
OpenGL is managed by the OpenGL Architecture Review Board, an independent
organisation with member representatives from all major UNIX vendors, relevant
companies and institutions.
- Journaled file systems:
- File systems like SGI's XFS running on powerful UNIX systems like
CrayOrigin2000 can easily support sustained data transfer rates of hundreds of
gigabytes per second. XFS has a maximum file size limit of 9 million
terabytes.
The end result of the last 30 years of UNIX development is
what is known as an 'Open System', ie. a system which permits reliable
application portability, interoperability between different systems and
effective user portability between a wide variety of different vendor hardware
and software platforms. Combined with a modern set of compliance standards,
UNIX is now a mature, well-understood, highly developed, powerful and very
capable OS.
Many important features of UNIX do not exist in other OSs such as WindowsNT
and will not do so for years to come, if ever. These include guaranteeable
reliability, security, stability, extreme scalability (thousands of processors),
proper support for advanced multi-processing with unified shared memory and
resources (ie. parallel compute systems with more than 1 CPU), support for
genuine real-time response, portability and an ever-increasing ease-of-use
through highly advanced GUIs. Modern UNIX GUIs combine the familiar use of icons
with the immense power and flexibility of the UNIX shell command line which, for
example, supports full remote administration (a significant criticism of WinNT
is the lack of any real command line interface for remote administration). By
contrast, Windows2000 includes a colossal amount of new code which will
introduce a plethora of new bugs and problems.
A summary of key UNIX features would be:
- Multi-tasking: many different processes can operate independently at once.
- Multi-user: many users can use a single machine at the same time; a single
user can use multiple machines at the same time.
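This multi-tasking is directly visible at the shell level: appending '&' to a
command detaches it to run concurrently with the shell and with other jobs (a
trivial sketch):

```shell
# Launch two independent processes; the shell does not wait for them.
sleep 2 &
sleep 2 &

# 'wait' blocks until all background jobs finish. Because the two
# sleeps run concurrently, the whole script takes about 2 seconds,
# not 4.
wait
```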
- Multi-processing: most commercial UNIX systems scale to at least 32 or 64
CPUs (Sun, IBM, HP), while others scale to hundreds or thousands (IRIX,
Unicos, AIX, etc.; Blue Mountain, Blue Pacific, ASCI Red). Today,
WindowsNT cannot reliably scale to even 8 CPUs. Intel will not begin selling
8-way chip sets until Q3 1999.
- Multi-threading: automatic parallel execution of applications across
multiple CPUs and graphics systems when programs are written using the
relevant extensions and libraries. Some tasks parallelise at a coarse grain
rather than by threading, eg. rendering animation frames for movies (each
processor computes a whole frame using a round-robin approach), while others
lend themselves very well to
parallel execution, eg. Computational Fluid Dynamics, Finite Element Analysis,
Image Processing, Quantum Chromodynamics, weather modeling, database
processing, medical imaging, visual simulation and other areas of 3D graphics,
- Platform independence and portability: applications written on UNIX
systems will compile and run on other UNIX systems if they're developed with a
standards-based approach, eg. the use of ANSI C or C++, Motif libraries, etc.;
UNIX hides the hardware architecture from the user, easing portability. The
close relationship between UNIX and C, plus the fact that the UNIX shell is
based on C, provides for a powerful development environment. Today, GUI-based
development environments for UNIX systems also exist, giving even greater
power and flexibility, eg. SGI's WorkShop Pro CASE tools and RapidApp.
- Full 64bit environment: proper support for very large memory spaces, up to
hundreds of GB of RAM, visible to the system as a single combined memory
space. Comparison: NT's current maximum limit is 4GB; IRIX's current
commercial limit is 512GB, though Blue Mountain's 6144-CPU SGI system has a
current limit of 12000GB RAM (twice that if the CPUs were upgraded to the
latest model). Blue Mountain has 1500GB RAM installed at the moment.
- Inter-system communication: services such as telnet, Sendmail, TCP/IP,
remote login (rlogin), DNS, NIS, NFS, etc. Sophisticated security and access
control. Features such as email and telnet are a fundamental part of UNIX, but
they must be added as extras to other OSs. UNIX allows one to transparently
access devices on a remote system and even install the OS using a CDROM, DAT
or disk that resides on a remote machine. Note that some of the development
which went into these technologies was in conjunction with the evolution of
ArpaNet (the early Internet that was just for key US government, military,
research and educational sites).
- File identity and access: unique file ownership and a logical file access
permission structure provide very high-level management of file access for use
by users and administrators alike. OSs which lack these features as a core
part of the OS make it far too easy for a hacker or even an ordinary user to
gain administrator-level access (NT is a typical example).
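The ownership and permission model referred to above can be inspected and
changed with a handful of commands (the file name here is invented for
illustration):

```shell
touch report.txt        # new file, stamped with the creator's identity

# Octal mode 640: owner may read/write, group may read, others nothing
chmod 640 report.txt

# The long listing shows mode, owner and group together:
ls -l report.txt        # eg. -rw-r----- 1 alice staff 0 ... report.txt
```

Every file carries this information as part of the file system itself, which
is why access control needs no bolt-on layer.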
- System identity: every UNIX system has a distinct unique entity, ie. a
system name and an IP (Internet Protocol) address. These offer numerous
advantages for users and administrators, eg. security, access control,
system-specific environments, the ability to login and use multiple systems at
once, etc.
- Genuine 'plug & play': UNIX OSs already include drivers and support
for all devices that the source vendor is aware of. Adding most brands of
disks, printers, CDROMs, DATs, Floptical drives, ZIP or JAZ drives, etc. to a
system requires no installation of any drivers at all (the downside of this is
that a typical modern UNIX OS installation can be large, eg. 300MB). Detection
and name-allocation to devices is largely automatic - there is no need to
assign specific interrupt or memory addresses for devices, or assign labels
for disk drives, ZIP drives, etc. Devices can be added and removed without
affecting the long-term operation of the system. This also often applies to
internal components such as CPUs, video boards, etc. (at least for SGIs).
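Part of the reason this works is that UNIX presents devices as ordinary
entries in the file system (normally under /dev), so no separate driver
installation step is needed for supported hardware:

```shell
# Character devices are marked 'c' and block devices 'b' in the first
# column of a long listing:
ls -l /dev/null /dev/tty

# Because devices are files, ordinary commands apply to them, eg.
# discarding unwanted output:
echo "scratch output" > /dev/null
```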
In recent years, one aspect of UNIX that was holding it back from spreading
more widely was cost. Many vendors often charged too high a price for their
particular flavour of UNIX. This made its use by small businesses and home
users prohibitive. The ever decreasing cost of PCs, combined with the sheer
marketing power of Microsoft, gave rise to the rapid growth of Windows and now
WindowsNT. However, in 1991, Linus Torvalds developed a version of UNIX called
Linux (he pronounces it rather like 'leenoox', rhyming with 'see-books') which
was free and ran on PCs as well as other hardware platforms such as DEC
machines. In what must be one of the most astonishing developments of the
computer age, Linux has rapidly grown to become a highly popular OS for home
and small business use and is now being supported by many major companies too,
including Oracle, IBM, SGI, HP, Dell and others.
Linux does not have the sophistication of the more traditional UNIX
variants such as SGI's IRIX, but Linux is free (older releases of IRIX such as
IRIX 6.2 are also free, but not the very latest release, namely IRIX 6.5).
This has resulted in the rapid adoption of Linux by many people and
businesses, especially for servers, application development, home use, etc.
With the recent announcement of support for multi-processing in Linux for up
to 8 CPUs, Linux is becoming an important player in the UNIX world and a
likely candidate to take on Microsoft in the battle for OS dominance.
However, Linux will likely never be used for 'serious'
applications since it does not have the rigorous development history and
discipline of other UNIX versions, eg. Blue Mountain is an IRIX system
consisting of 6144 CPUs, 1500GB RAM, 76000GB disk space, and capable of 3000
billion floating-point operations per second. This level of system development
is what drives many aspects of today's UNIX evolution and the hardware which
supports UNIX OSs. Linux lacks this top-down approach and needs a lot of work
in areas such as security and support for graphics, but Linux is nevertheless
becoming very useful in fields such as render-farm construction for movie
studios, eg. a network of cheap PentiumIII machines, networked and running the
free Linux OS, reliable and stable. The film "Titanic" was the first major
film which used a Linux-based render-farm, though it employed many other UNIX
systems too (eg. SGIs, Alphas), as well as some NT systems.
UNIX has come a long way since 1969. Thompson and Ritchie could never have
imagined that it would spread so widely and eventually lead to its use in such
things as the control of the Mars Pathfinder probe which landed on Mars in
1997, including the operation of the Internet web server which allowed
millions of people around the world to see the images brought back as the
Martian event unfolded.
Today, from an administrator's perspective, UNIX is a stable and reliable
OS which pretty much runs itself once it's properly set up. UNIX requires far
less daily administration than other OSs such as NT - a factor not often taken
into account when companies make purchasing decisions (salaries are a major
part of a company's expenditure). UNIX certainly has its baggage in terms of
file structure and the way some aspects of the OS actually work, but after so
many years most if not all of the key problems have been solved, giving rise
to an OS which offers far superior reliability, stability, security, etc. In
that sense, UNIX has very well-known baggage which is absolutely vital to
safety-critical applications such as military, medical, government and
industrial use. Byte magazine once said that NT was only now tackling OS
issues which other OSs had solved years before.
Thanks to a standards-based and top-down approach, UNIX is evolving to
remove its baggage in a reliable way, eg. the introduction of the NSD (Name
Service Daemon) to replace DNS (Domain Name Service), NIS (Network Information
Service) and aspects of NFS operation; the new service is faster, more
efficient, and lighter on system resources such as memory and network
bandwidth.
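As a sketch of how this unified approach looks in practice on IRIX 6.5 (the exact file layout varies by release, so treat the details below as an illustrative assumption rather than a definitive reference), nsd performs all name lookups itself and is directed by a single nsswitch.conf file listing, for each database, which sources to try and in what order:

```
# /etc/nsswitch.conf - read by nsd; one line per lookup database,
# sources are tried left to right until one answers
passwd:   files nis        # local /etc/passwd first, then NIS
group:    files nis
hosts:    files nis dns    # consult DNS only if files and NIS fail
```

With one daemon and one configuration file covering all databases, an administrator changes lookup policy in a single place instead of juggling separate resolver, NIS and host-file configurations.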
However, in the never-ending public relations battle for computer systems
and OS dominance, NT has firmly established itself as an OS which will be
increasingly used by many companies due to the widespread use of the
traditional PC and the very low cost of Intel's mass-produced CPUs. Rival
vendors continue to offer much faster systems than PCs, whether or not UNIX is
used, so I expect to see interesting times ahead in the realm of OS
development. Companies like SGI bridge the gap by releasing advanced hardware
systems which support NT (eg. the Visual Workstation 320), systems whose
design is born out of UNIX-based experience.
One thing is certain: some flavour of UNIX, whatever that variant may be,
will always be at the forefront of future OS development.
- Texaco processes GIS data in order to identify suitable sites for oil
exploration. Their models can take several months to run even on large
multi-processor machines. However, as systems become faster, companies like
Texaco simply try to solve more complex problems, with more detail, etc.
- Chevron's Nigerian office had what was, in mid-1998, the fastest
supercomputer in Africa, namely a 16-processor SGI POWER Challenge (probably
replaced by now with a modern 64-CPU Origin2000). A typical data set
processed by the system is about 60GB which takes around two weeks to
process, during which time the system must not go wrong or much processing
time is lost. For individual work, Chevron uses Octane workstations which
are able to process 750MB of volumetric GIS data in less than three seconds.
Solving these types of problems with PCs is not yet possible.
- The 'Tasmania Parks and Wildlife Services' (TPWS) organisation is
responsible for the management and environmental planning of Tasmania's
National Parks. They use modern systems like the SGI O2 and SGI Octane for
modeling and simulation (virtual park models to aid in decision making and
planning), but have found that much older systems such as POWER Series
Predator and Crimson RealityEngine (SGI systems dating from 1992) are
perfectly adequate for their tasks, and can still outperform modern PCs. For
example, the full-featured pixel-fill rate of their RealityEngine system
(320M/sec), which supports 48bit colour at very high resolutions (1280x2048
with 160MB VRAM), has still not been bettered by any modern PC solution.
Real-time graphics comparisons at http://www.blender.nl/stuff/blench1.html
show the Crimson RE easily outperforming many modern PCs which ought to be
faster, given that the RE is seven years old. Information supplied by Simon
Pigot (TPWS).
- "State University of New York at Buffalo Teams up with SGI for
Next-Level Supercomputing Site. New Facility Brings Exciting Science and
Competitive Edge to University":
- Even though the email-related aspects of the Computing Department's SGI
network have not been changed in any way from the default settings (created
during the original OS installation), users can still email other users on
the system as well as send email to external sites.
- Unix history:
- A Brief History of UNIX:
- UNIX Lectures:
- Basic UNIX:
- POSIX: Portable Operating System Interface:
- "The Triumph of the Nerds", Channel 4 documentary.
- Standard Performance Evaluation Corporation:
- Example use of UnixWare by Intel for benchmark reporting:
- "My Visit to the USA" (id Software, Paradigm Simulation Inc., NOA):
- Personal IRIS 4D/25, PCW Magazine, September 1990, pp. 186:
- IndigoMagic User Environment, SGI, 1993 [IND-MAGIC-BRO(6/93)].
IRIS Indigo Brochure, SGI, 1991 [HLW-BRO-01 (6/91)].
"Smooth Operator", CGI Magazine, Vol4, Issue 1, Jan/Feb 1999, pp.
Digital Media World '98 (Film Effects and Animation Festival, Wembley
Conference Centre, London). Forty-six pieces of work were submitted to the
conference magazine by company attendees. Of the 46 items, 43 had used
SGIs; of these, 34 had used only SGIs.
- "MIPS-based PCs fastest for WindowsNT", "MIPS Technologies announces
200MHz R4400 RISC microprocessor", "MIPS demonstrates Pentium-class RISC PC
designs", - all from IRIS UK, Issue 1, 1994, pp. 5.
- Blue Mountain, Los Alamos National Laboratory:
- "Silicon Graphics Technology Plays Mission-Critical Role in Mars
- "Silicon Graphics WebFORCE Internet Servers Power Mars Web Site, One
of the World's Largest Web Events"
- "PC Users Worldwide Can Explore VRML Simulation of Mars Terrain Via
- "Deja Vu All Over Again"; "Windows NT security is under fire. It's not
just that there are holes, but that they are holes that other OSes patched
years ago", Byte Magazine, Vol 22 No. 11, November 1997 Issue, pp. 81 to 82,
by Peter Mudge and Yobie Benjamin.
- VisualWorkstation320 Home Page: