Tuesday 11 August 2009

The Computer History Museum
Background

The Computer History Museum was formally established as a non-profit 501(c)(3) organization in 1999. The Museum is dedicated to the preservation and celebration of computing history. We are home to one of the largest collections of computing artifacts in the world, encompassing physical objects, ephemera, photographs, moving images, documents and software.

In 1979, the Museum's founders Gordon and Gwen Bell opened an exhibit of many computing devices from their personal collection in the lobby of Digital Equipment Corporation. It wasn't long afterward that the Museum's name became The Digital Computer Museum and subsequently The Computer Museum.

In the fall of 1984, The Computer Museum opened to the public at Museum Wharf in Boston, sharing space with the Children's Museum. The Museum focused on computing history lectures, exhibiting highlights of the collection, and many children's educational activities, including a two-story walk-through computer, a virtual fish tank and a robot theatre.

In 1996, a significant portion of the Museum's collection moved to Mountain View, California, which was seen as an ideal location for collecting and preserving the Museum's growing collection. In Boston, the focus remained on exhibits. By 2000, the remaining collection from Boston had arrived in Silicon Valley.

In 2001, the Museum shortened its name from The Computer Museum History Center to our current name. The next year, we moved into our permanent home with the purchase of a landmark building on Shoreline Boulevard in Mountain View. We currently have three exhibits on-site: Visible Storage: Samples from the Collection; Mastering the Game: A History of Computer Chess; and Innovation 101.

Under construction is the Museum's signature exhibit, entitled "Computer History: The First 2000 Years," due to open in late 2010.
Tours and Hours of Operation

The Museum's "Visible Storage" exhibit area is open for docent-led tours every week. Self-guided tours are also available in "Visible Storage" and the Museum's other two exhibits, "Innovation in the Valley" and "Mastering the Game: A History of Computer Chess." Tours are free, and hours of operation are listed on the Museum's website. Groups of 10 or more should call in advance at 650-810-1038 or e-mail tours@computerhistory.org.
Education

The Computer History Museum offers lectures, seminars and workshops with scholarly historical perspectives about and by the pioneers of the computing industry. The Museum's emphasis on preservation and education makes it a unique resource for media researchers, historians, scientists, industry professionals and students of all ages. Research services are available to scholars by staff researchers as well as through the Museum's comprehensive website.
Lectures and Events

The Museum is proud to host monthly lectures with leading innovators, industry giants, opinion leaders, experts, engineers and scientists, who share their personal stories and insights about the developments, events and discoveries that have shaped our world. The Museum also frequently hosts other significant events that highlight and honor the history of computing and celebrate major industry milestones as they occur.
Fellow Awards

For the past twenty years, the Museum has honored computing pioneers at an annual Fellow Awards Celebration. Museum Fellows are individuals who have made revolutionary and lasting contributions to the development of computing; in connection with the awards program, they often present a lecture, conduct a workshop or record oral histories.
Publications

Articles about industry leaders and computing breakthroughs appear in the Museum's annual spring publication, Core, and the Museum's staff also publishes articles with both technical and historical content in complementary journals and magazines.
Future Plans

The Computer History Museum's efforts are underway to further expand its home in Silicon Valley. Future plans include additional and changing exhibits as well as theme rooms. Next up for spring 2008 is the debut of the Babbage Difference Engine #2, an extraordinary Victorian-era computing device that no Victorian ever saw! It was finally built 153 years after Charles Babbage (1791-1871) produced his original design, and is rich in history. And then there is the Museum's 14,000-square-foot signature exhibit, the "Timeline of Computing History," due to open in the fall of 2009.

Wednesday 5 August 2009

Keystroke logging
Keystroke logging (often called keylogging) is the practice of noting (or logging) the keys struck on a keyboard, typically in a covert manner so that the person using the keyboard is unaware that their actions are being monitored. There are numerous keylogging methods, ranging from hardware- and software-based to electromagnetic and acoustic analysis.
Application

Software-based keyloggers
These are software programs designed to work on the target computer's operating system. From a technical perspective there are five categories:
Hypervisor-based: The keylogger can theoretically reside in a malware hypervisor running underneath the operating system, which remains untouched, except that it effectively becomes a virtual machine. See Blue Pill for a conceptual example.
Kernel based: This method is difficult both to write and to combat. Such keyloggers reside at the kernel level and are thus difficult to detect, especially for user-mode applications. They are frequently implemented as rootkits that subvert the operating system kernel and gain unauthorized access to the hardware, which makes them very powerful. A keylogger using this method can act as a keyboard device driver, for example, and thus gain access to any information typed on the keyboard as it goes to the operating system.
Hook based: Such keyloggers hook the keyboard using functionality provided by the operating system for applications to subscribe to keyboard events legitimately. The operating system notifies the keylogger each time a key is pressed and the keylogger simply records it.
Passive Methods: Here the coder uses operating system APIs like GetAsyncKeyState(), GetForegroundWindow(), etc. to poll the state of the keyboard or to subscribe to keyboard events. These are the easiest to write, but where constant polling of each key is required, they can cause a noticeable increase in CPU usage and can miss the occasional key. A more recent example simply polls the BIOS for preboot authentication PINs that have not been cleared from memory.
Form grabber based: These keyloggers log web form submissions by recording the browser's onsubmit event functions. This captures form data before it is passed over the internet, bypassing HTTPS encryption.

Remote access software keyloggers
These are local software keyloggers programmed with an added feature to transmit recorded data out of the target computer and make the data available to the monitor at a remote location. Remote communication is facilitated by one of four methods:
Data is uploaded to a website or an ftp account.
Data is periodically emailed to a pre-defined email address.
Data is wirelessly transmitted by means of an attached hardware system.
The software allows the monitor to log into the target machine remotely, via the internet or a local network, and access the logs stored there.

Related Features
Software keyloggers may be augmented with features that capture user information without relying on keyboard key presses as the sole input. Some of these features include:
Clipboard logging. Anything that has been copied to the clipboard can be captured by the program.
Screen logging. Screenshots are taken in order to capture graphics-based information. Applications with screen logging abilities may take screenshots of the whole screen, of just one application, or even just around the mouse cursor. They may take these screenshots periodically or in response to user behaviours (for example, when a user clicks the mouse). A practical application used by some keyloggers with this ability is to take small screenshots around where the mouse has just clicked; this defeats web-based on-screen keyboards (for example, those often used by banks) unless they have screenshot protection.
Programmatically capturing the text in a control. The Microsoft Windows API allows programs to request the text 'value' in some controls. This means that some passwords may be captured, even if they are hidden behind password masks (usually asterisks).[2]

Hardware-based keyloggers
Main article: Hardware keylogger
Hardware-based keyloggers do not depend upon any software being installed as they exist at a hardware level in a computer system.
Firmware-based: BIOS-level firmware that handles keyboard events can be modified to record these events as they are processed. Physical and/or root-level access to the machine is required, and the software loaded into the BIOS needs to be created for the specific hardware on which it will run.
Keyboard hardware: Hardware keyloggers are used for keystroke logging by means of a hardware circuit that is attached somewhere between the computer keyboard and the computer, typically inline with the keyboard's cable connector. More stealthy implementations can be installed or built into standard keyboards, so that no device is visible on the external cable. Both types log all keyboard activity to their internal memory, which can subsequently be accessed, for example, by typing in a secret key sequence.[3] A hardware keylogger has an advantage over a software solution: because it does not depend on installation on the target computer's operating system, it will not interfere with any program running on the target machine and cannot be detected by any software. However, its physical presence may be detected, for example if it is installed outside the case as an inline device between the computer and the keyboard. Some of these implementations can be controlled and monitored remotely by means of a wireless communication standard.[citation needed]

Wireless keyboard sniffers
These passive sniffers collect packets of data being transferred between a wireless keyboard and its receiver. As encryption may be used to secure the wireless communications between the two devices, it may need to be cracked beforehand if the transmissions are to be read.

Keyboard overlays
Criminals have been known to use keyboard overlays on ATMs to capture people's PINs. Each keypress is registered by the keyboard of the ATM as well as by the criminal's keypad that is placed over it. The device is designed to look like an integrated part of the machine so that bank customers are unaware of its presence.

Acoustic keyloggers
Acoustic cryptanalysis can be used to monitor the sound created by someone typing on a computer. Each character on the keyboard makes a subtly different acoustic signature when struck. It is then possible to identify which keystroke signature relates to which keyboard character via statistical methods such as frequency analysis. The repetition frequency of similar acoustic keystroke signatures, the timings between different keyboard strokes and other context information, such as the probable language in which the user is writing, are used in this analysis to map sounds to letters. A fairly long recording (1000 or more keystrokes) is required so that a big enough sample is collected.[citation needed]
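The statistical step can be sketched as a toy in a few lines. In the sketch below, opaque signature IDs stand in for real acoustic fingerprints (a large simplification), and each distinct signature is paired with a letter simply by matching frequency ranks against typical English letter frequency:

```python
from collections import Counter

# Letters ranked by approximate descending frequency in English text
ENGLISH_FREQ_ORDER = "etaoinshrdlcumwfgypbvkjxqz"

def guess_mapping(signatures):
    """Pair each distinct keystroke signature with a letter by frequency rank.

    `signatures` is a sequence of opaque IDs, one per keystroke, where equal
    IDs mean the same (unknown) key produced the sound.
    """
    ranked = [sig for sig, _ in Counter(signatures).most_common()]
    return {sig: ENGLISH_FREQ_ORDER[i] for i, sig in enumerate(ranked)}

# Toy run: use each letter's character code as its "signature"
sample = "the three trees".replace(" ", "")
mapping = guess_mapping([ord(c) for c in sample])
```

A pure frequency ranking only gives a starting hypothesis; as the paragraph above notes, keystroke timings and language context are needed to refine it.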

Electromagnetic emissions
It is possible to capture the electromagnetic emissions of a keyboard, without being physically wired to it.

Optical surveillance
Not a keylogger in the classical sense, but an approach that can nonetheless be used to capture passwords or PINs. A strategically placed camera, such as a hidden surveillance camera at an ATM, can allow a criminal to watch a PIN or password being entered.

Cracking
Writing software applications for keylogging is trivial,[citation needed] and like any nefarious computer program, a keylogger can be distributed as a trojan horse or as part of a virus. What is not trivial for an attacker, however, is installing a covert keystroke logger without getting caught and downloading data that has been logged without being traced. An attacker that manually connects to a host machine to download logged keystrokes risks being traced. A trojan that sends keylogged data to a fixed e-mail address or IP address risks exposing the attacker.

Trojan
Young and Yung devised several methods for solving this problem and presented them in their 1997 IEEE Security & Privacy paper (their paper from '96 touches on it as well). They presented a deniable password snatching attack in which the keystroke logging trojan is installed using a virus (or worm). An attacker that is caught with the virus or worm can claim to be a victim. The cryptotrojan asymmetrically encrypts the pilfered login/password pairs using the public key of the trojan author and covertly broadcasts the resulting ciphertext. They mentioned that the ciphertext can be steganographically encoded and posted to a public bulletin board (e.g. Usenet).

Ciphertext
Young and Yung also mentioned having the cryptotrojan unconditionally write the asymmetric ciphertexts to the last few unused sectors of every writable disk that is inserted into the machine. The sectors remain marked as unused. This can be done using a USB token. So, the trojan author may be one of dozens or even thousands of people that are given the stolen information. Only the trojan author can decrypt the ciphertext because only the author knows the needed private decryption key. This attack is from the field known as cryptovirology.

Federal Bureau of Investigation
In 2000, the FBI used a keystroke logger to obtain the PGP passphrase of Nicodemo Scarfo, Jr., son of mob boss Nicodemo Scarfo.

Use in surveillance software
Some surveillance software has keystroke logging abilities and is advertised to monitor the internet use of minors. Such software has been criticized on privacy grounds, and because it can be used maliciously or to gain unauthorized access to users' computer systems.

Countermeasures
Countermeasures against keyloggers will vary depending on the type of keylogger in use.

Code signing
64-bit versions of Windows Vista and Server 2008 implement mandatory digital signing of kernel-mode device drivers, thereby restricting the installation of key-logging rootkits.

Anti-spyware
Many anti-spyware applications are able to detect keyloggers and quarantine, disable or cleanse them. These applications are able to detect keyloggers based on patterns in executable code, heuristics and keylogger behaviours (such as the use of hooks and certain APIs).
No software-based anti-spyware application can be 100% effective against all keyloggers. Also, software-based anti-spyware cannot defeat non-software keyloggers (for example, hardware keyloggers attached to keyboards will always receive keystrokes before any software-based anti-spyware application, rendering the anti-spyware application useless).
However, the particular technique that the anti-spyware application uses will influence its potential effectiveness against software keyloggers. As a general rule, anti-spyware applications with higher privileges (see Ring (computer security)) will defeat keyloggers with lower privileges. For example, a hook-based anti-spyware application cannot defeat a kernel-based keylogger (as the keylogger will receive the keystroke messages before the anti-spyware application), but it could potentially defeat hook and API-based keyloggers.

Firewall
Enabling a firewall does not stop keyloggers per se, but can prevent the remote installation of key logging software, and possibly prevent transmission of the logged material over the internet if properly configured.

Network monitors
Network monitors (also known as reverse-firewalls) can be used to alert the user whenever an application attempts to make a network connection. This gives the user the chance to prevent the keylogger from "phoning home" with his or her typed information.

Automatic form filler programs
Automatic form-filling programs may prevent keylogging by removing the requirement for a user to type personal details and passwords using the keyboard. Form fillers are primarily designed for web browsers to fill in checkout pages and log users into their accounts. Once the user's account and credit card information has been entered into the program, it will be automatically entered into forms without ever using the keyboard or clipboard, thereby reducing the possibility that private data is being recorded. However someone with physical access to the machine may still be able to install software that is able to intercept this information elsewhere in the operating system or while in transit on the network. (Transport Layer Security prevents the interception of data in transit by network sniffers and proxy tools.)

Alternative keyboard layouts
Most keylogging hardware/software assumes that a person is using the standard QWERTY keyboard layout, so a user typing in an alternative layout such as Dvorak produces a log that reads as gibberish; frequency analysis is then required to determine the mapping of the captured keystrokes. For additional security, custom keyboard layouts can be created using tools like the Microsoft Keyboard Layout Creator.
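To see why a QWERTY assumption scrambles the log, the sketch below models a logger that records physical key positions (labeled with their QWERTY characters) while the user types in Dvorak. Only the main letter rows are modeled, for brevity:

```python
# Physical key positions labeled by QWERTY, and the Dvorak characters
# those same positions produce (three letter rows only)
QWERTY = "qwertyuiopasdfghjkl;zxcvbnm"
DVORAK = "',.pyfgcrlaoeuidhtns;qjkxbm"

# To produce a given Dvorak character, the user presses the physical key
# carrying this QWERTY label -- which is what a naive logger records
PRESSED_KEY = {d: q for q, d in zip(QWERTY, DVORAK)}

def logger_sees(intended: str) -> str:
    """Return the QWERTY-labeled key sequence a naive logger would record."""
    return "".join(PRESSED_KEY.get(c, c) for c in intended)

print(logger_sees("hello"))  # a QWERTY-assuming logger records "jdpps"
```

The attacker holding "jdpps" must first discover (e.g. via frequency analysis) that the victim types in Dvorak before the log becomes readable.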

One-time passwords (OTP)
Using one-time passwords can defeat keyloggers, because each recorded password is invalidated as soon as it is used. This solution is useful when using public computers on which you cannot verify what is running. One-time passwords also prevent replay attacks, in which an attacker uses the captured information to impersonate the victim. One example is online banking, where one-time passwords protect accounts from keylogging attacks as well as replay attacks.
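One standardized counter-based scheme, HOTP (RFC 4226, 2005), shows why a logged code is useless: each code is derived from a shared secret and a counter that advances on every use, so the captured value never validates again. A minimal sketch:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation per the RFC
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

secret = b"12345678901234567890"  # the RFC 4226 test-vector secret
print(hotp(secret, 0))  # "755224" -- the next counter value yields "287082"
```

A keylogger that records "755224" gains nothing: by the time the attacker replays it, the server expects the code for a later counter value.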

Smart cards
Because of their integrated circuit, smart cards themselves are not affected by keyloggers and other logging attempts. A smart card can process the login and return a unique code each time you log in, so the captured information cannot usually be used to log in again. However, smart card readers and their associated keypads for PIN entry are still vulnerable to keylogging.

On-screen keyboards

Program-to-program (non-web) keyboards
It is sometimes said that a third-party (or first party) on-screen keyboard program is a good way to combat keyloggers, as it only requires clicks of the mouse. However, this is not always true.
Most on-screen keyboards (such as the on-screen keyboard that comes with Microsoft Windows XP) send ordinary keyboard event messages to the external target program to type text. Any software keylogger can log these typed characters sent from one program to another. Additionally, some programs also record or take snapshots of what is displayed on the screen (periodically, and/or upon each mouse click).
However, there are some on-screen keyboard programs that do offer some protection, using other techniques described in this article (such as dragging and dropping the password from the on-screen keyboard to the target program).

Web-based keyboards
Web-based on-screen keyboards (written in JavaScript, etc.) may provide some degree of protection. At least some commercial keylogging programs do not record typing on a web-based virtual keyboard. (Screenshot recorders are a concern whenever entire passwords are displayed; fast recorders are generally required to capture a sequence of virtual key presses.)
Notably, the game MapleStory uses, in addition to a standard alphanumeric password, a 4-digit PIN code secured by both on-screen keyboard entry and a randomly changing button pattern; there is no practical way to capture the latter without logging the screen and mouse movements. Another MMORPG, RuneScape, makes a similar system available for players to protect their in-game bank accounts.
Many banks, HSBC among them, use a web-based on-screen keyboard to prevent keylogging.

Anti-keylogging software
Keylogger detection software is also available. Some software of this type uses "signatures" compiled from a list of all known keyloggers: the PC's legitimate users can periodically run a scan, and the software looks for items from the list on the hard drive. One drawback of this approach is that it only protects against keyloggers on the signature list, leaving the PC vulnerable to others.
Other detection software doesn't use a signature list, but instead analyzes the working methods of many modules in the PC, allowing it to block the work of many different types of keylogger. One drawback of this approach is that it can also block legitimate, non-keylogging software. Some heuristics-based anti-keyloggers have the option to unblock known good software, but this can cause difficulties for inexperienced users.
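The signature-based approach can be sketched as follows. Here the "signature" is simply a SHA-256 hash of the file contents (real products also match code patterns and byte sequences, so this is a deliberately naive model):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file's contents in chunks to avoid loading it whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(root: str, known_bad: set) -> list:
    """Return files under `root` whose hash appears on the known-bad list."""
    return [p for p in Path(root).rglob("*")
            if p.is_file() and sha256_of(p) in known_bad]
```

The drawback described above is visible in the code: a single changed byte in a keylogger's executable changes its hash, and the scan misses it entirely, which is why heuristic and behavioural detection exists alongside signatures.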

Speech recognition
Similar to on-screen keyboards, speech-to-text conversion software can also be used against keyloggers, since there are no typing or mouse movements involved. The weakest point of using voice-recognition software may be how the software sends the recognized text to the target software after recognition has taken place.

Handwriting recognition and mouse gestures
Many PDAs, and more recently Tablet PCs, can convert pen (also called stylus) movements on their touchscreens into computer-understandable text. Mouse gestures use the same principle, substituting mouse movements for a stylus; mouse gesture programs convert these strokes into user-definable actions, such as typing text. Similarly, graphics tablets and light pens can be used to input these gestures, although they are becoming less common.
The same potential weakness of speech recognition applies to this technique as well.

Macro expanders/recorders
With the help of many freeware/shareware programs, a seemingly meaningless text can be expanded to a meaningful text, most of the time context-sensitively; e.g. "we" can be expanded to "en.wikipedia.org" when a browser window has the focus. The biggest weakness of this technique is that these programs send their keystrokes directly to the target program. However, this can be overcome by using the 'alternating' technique described below, i.e. sending mouse clicks to non-responsive areas of the target program, sending meaningless keys, sending another mouse click to the target area (e.g. the password field), and switching back and forth.
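The context-sensitive expansion itself is a small lookup; the sketch below models it with a table keyed by the focused window's type (the table entries and the "browser"/"mail" contexts are made-up examples, not any particular product's configuration):

```python
# Expansion table keyed by (focused-window context, typed abbreviation)
EXPANSIONS = {
    ("browser", "we"): "en.wikipedia.org",
    ("mail", "we"): "with everything",
}

def expand(abbrev: str, context: str) -> str:
    """Replace a typed abbreviation according to the active window context."""
    return EXPANSIONS.get((context, abbrev), abbrev)

print(expand("we", "browser"))  # "en.wikipedia.org"
```

Note that a real expander must then deliver the expanded string as synthetic keystrokes to the target program, which is exactly the weakness the paragraph above describes: a keylogger positioned between the expander and the target sees the full expansion.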

Window transparency
Using many readily available utilities, the target window could be made temporarily transparent, in order to hinder screen-capturing by advanced keyloggers.[citation needed] Although not a fool-proof technique against keyloggers on its own, this could be used in combination with other techniques.

Non-technological methods
Some keyloggers can be fooled by alternating between typing the login credentials and typing characters somewhere else in the focus window. Similarly, a user can move their cursor using the mouse during typing, causing the logged keystrokes to be in the wrong order e.g. by typing a password beginning with the last letter and then using the mouse to move the cursor for each subsequent letter. Lastly, someone can also use context menus to remove, copy, cut and paste parts of the typed text without using the keyboard.
Another very similar technique exploits the fact that any selected text portion is replaced by the next key typed. For example, if the password is "secret", one could type "s", then some dummy keys "asdfsd". The dummies could then be selected with the mouse, and the next character of the password, "e", typed, replacing the dummies "asdfsd".
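This trick can be simulated to contrast what a naive keystroke log captures with what actually ends up in the password field. The model below is a toy: it ignores the mouse events themselves, which a real logger would record but could not easily interpret:

```python
def simulate(actions):
    """Replay editing actions; return (field contents, naive keystroke log)."""
    field, log = "", []
    for action in actions:
        if action[0] == "type":
            field += action[1]
            log.extend(action[1])
        else:  # ("replace", n, ch): mouse-select the last n chars, type ch
            field = field[:-action[1]] + action[2]
            log.append(action[2])
    return field, "".join(log)

# Type "s", then dummies, then select the dummies and overtype the next
# real character -- repeated for each remaining letter of "secret"
steps = [("type", "s")]
for ch in "ecret":
    steps += [("type", "asdfsd"), ("replace", 6, ch)]

field, log = simulate(steps)
print(field)  # "secret"
```

The field ends up holding "secret", while the keystroke log contains the password letters buried among dummies, never contiguously.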



Branching (software)
Branching, in revision control and software configuration management, is the duplication of an object under revision control (such as a source code file, or a directory tree) so that modifications can happen in parallel along both branches.
Branches are also known as trees, streams or codelines. The originating branch is sometimes called the parent branch, the upstream branch (or simply upstream - especially if the branches are maintained by different organisations or individuals), or the backing stream. Child branches are branches that have a parent; a branch without a parent is referred to as the trunk or the mainline.
In some distributed revision control systems, such as Darcs, there is no distinction made between repositories and branches - so in these systems, fetching a copy of a repository is equivalent to branching.
Branching also generally implies the ability to later merge or integrate changes back onto the parent branch. Often the changes are merged back to the trunk, even if this is not the parent branch. A branch not intended to be merged (e.g. because it has been relicensed under an incompatible license by a third party, or it attempts to serve a different purpose) is usually called a fork.
Branches are created for various reasons. These are covered in depth in the paper "Streamed Lines: Branching Patterns for Parallel Software Development" by Brad Appleton, Stephen Berczuk, Ralph Cabrera, and Robert Orenstein.
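The core idea above (duplicate, modify in parallel, integrate back) can be modeled in a few lines. This is a deliberately tiny toy: a "branch" is just a named snapshot of files, and "merge" is a blind overwrite, whereas real systems track history and perform three-way merges:

```python
import copy

# A toy repository: each branch name maps to a tree of file contents
repo = {"trunk": {"app.c": "v1"}}

def branch(repo, parent, child):
    """Duplicate the parent's tree so work can proceed in parallel."""
    repo[child] = copy.deepcopy(repo[parent])

def merge(repo, source, target):
    """Integrate the source branch's changes back into the target (naively)."""
    repo[target].update(repo[source])

branch(repo, "trunk", "feature")
repo["feature"]["app.c"] = "v2"   # a modification on the child branch
merge(repo, "feature", "trunk")   # changes flow back to the parent
```

A fork, in the terminology above, is simply a child branch on which `merge` back to the parent is never intended to run.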

Development branch
A development branch or development tree of a piece of software is a version that is under development, and has not yet been officially released. In the open source community, the notion of release is typically metaphorical, since anyone can usually check out any desired version, whether it be in the development branch or not. Often, the version that will eventually become the next major version is called the development branch. However, there is often more than one subsequent version of the software under development at a given time.
Some revision control systems have specific jargon for the main development branch - for example, in CVS it is called MAIN. A more generic term is "mainline".

Shadow
In cvc, an open source package building system (incorporating a simple revision control system for packages) produced by rPath, a shadow is a type of branch which is designed to "shadow" changes made in the upstream branch, to make it easier to maintain small changes.

Operating system
An Operating System (commonly abbreviated to either OS or O/S) is an interface between hardware and user; an OS is responsible for the management and coordination of activities and the sharing of the resources of the computer. The operating system acts as a host for computing applications that are run on the machine. As a host, one of the purposes of an operating system is to handle the details of the operation of the hardware. This relieves application programs from having to manage these details and makes it easier to write applications. Almost all computers (including handheld computers, desktop computers, supercomputers, video game consoles) as well as some robots, domestic appliances (dishwashers, washing machines), and portable media players use an operating system of some type. Some of the oldest models may, however, use an embedded operating system that may be contained on a compact disk or other data storage device.
Operating systems offer a number of services to application programs and users. Applications access these services through application programming interfaces (APIs) or system calls. By invoking these interfaces, the application can request a service from the operating system, pass parameters, and receive the results of the operation. Users may also interact with the operating system through some kind of software user interface (UI), such as typing commands at a command-line interface (CLI) or using a graphical user interface (GUI, commonly pronounced "gooey"). For hand-held and desktop computers, the user interface is generally considered part of the operating system. On large multi-user systems like Unix and Unix-like systems, the user interface is generally implemented as an application program that runs outside the operating system. (Whether the user interface should be included as part of the operating system is a point of contention.)
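The request/response pattern described above is visible even from a high-level language. In Python, for instance, the `os` module exposes thin wrappers over the operating system's open/write/read calls: each call passes parameters (flags, a buffer, a byte count) and receives a result (a file descriptor, a byte count, data). The file name here is arbitrary:

```python
import os

# Ask the OS to create and open a file, write to it, and read it back,
# passing parameters and receiving results through system-call wrappers
fd = os.open("demo.txt", os.O_CREAT | os.O_WRONLY | os.O_TRUNC, 0o644)
nbytes = os.write(fd, b"hello, kernel\n")   # result: number of bytes written
os.close(fd)

fd = os.open("demo.txt", os.O_RDONLY)
data = os.read(fd, 1024)                    # result: the bytes read
os.close(fd)
os.remove("demo.txt")
```

Every step delegates a hardware-facing detail (disk allocation, buffering, permissions) to the operating system, which is precisely the relief for application programs that the section describes.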
Common contemporary operating system families include BSD, Darwin (Mac OS X), GNU/Linux, SunOS (Solaris/OpenSolaris), and Windows NT (XP/Vista/7). While servers generally run Unix or some Unix-like operating system, embedded system markets are split amongst several operating systems.
History
In the beginning
Early proprietary operating systems were made to sell a manufacturer's hardware. Without system software (compilers and operating systems), a budding hardware developer had great difficulty launching a computer; the availability of operating systems not tied to a single hardware supplier - such as Digital Research's CP/M for microcomputers, and Unix for larger computers - greatly transformed the computer industry; someone with an innovative idea could easily start producing hardware on which buyers could use standard software. In 1969-70, UNIX first appeared on the PDP-7 and later the PDP-11. It soon became capable of providing cross-platform time sharing using preemptive multitasking, advanced memory management, memory protection, and a host of other advanced features. UNIX soon gained popularity as an operating system for mainframes and minicomputers alike. Unix was inspired by Multics, as were several other operating systems, such as Data General's AOS-VS, and IBM's addition of such concepts as subdirectories to PC DOS in version 2.0.
Microsoft bought QDOS from Seattle Computer Products, a very simple diskette operating system somewhat similar to CP/M, to create an operating system, PC DOS, for the launch of the IBM PC, under a deal with IBM where Microsoft could still sell the operating system as MS DOS for non-IBM computers. Microsoft produced odd-numbered major version numbers while IBM was responsible for even revision numbers (2.0, 2.1, 4.0, etc.) of the code base until version 6. There was very little difference between MS-DOS and PC-DOS, one example being the inclusion of GW-BASIC with MS-DOS (because some BASIC code in IBM PC ROMs was not legally allowed to be put into non-IBM computers). MS-DOS and PC-DOS soon became known simply as "DOS" (the term is now usually taken to also include other "DOSes" such as DR-DOS and FreeDOS, but it should not be confused with the command prompt program within some operating systems, COMMAND.COM). Although MS-DOS could be tailored to hardware significantly different from IBM's PC, it soon became common for hardware vendors to make their equipment as compatible as possible with the IBM PC and its immediate IBM successors (the PC-XT and later IBM PC-AT models), since many popular DOS programs bypassed the operating system to access hardware directly for speed, requiring other manufacturers to closely copy the IBM design, including its limitations. The availability of MS-DOS had two major effects on the computer industry: the commercial acceptability of "sneaky tricks" (as documented, for example, in Ralf Brown's Interrupt List) to gain speed, functionality or copy-protection, and a market that demanded extreme compatibility (speed and cosmetic differences were the only acceptable innovations).
IBM PC compatibles could also run Microsoft Xenix, a UNIX-like operating system from the early 1980s. Xenix was heavily marketed by Microsoft as a multi-user alternative to its single user MS-DOS operating system. The CPUs of these personal computers could not facilitate kernel memory protection or provide dual mode operation, so Xenix relied on cooperative multitasking and had no protected memory.
The 80286-based IBM PC AT was the first IBM compatible personal computer capable of providing protected memory mode operation. However, this mode was hampered by software bugs in its implementation on the 286, and was not widely accepted until the release of the Intel 80386. With the 386, porting BSD Unix to a PC became feasible, and various Unix-like systems (tagged "*nix" at the time), including Linux, arose; but IBM (and, initially, Microsoft) opted for OS/2 from the inception of the PS/2, with Microsoft eventually going its own way with Microsoft Windows, first as a GUI on top of DOS, then as a complete operating system.
Classic Mac OS and Microsoft Windows 1.0–3.11 supported only cooperative multitasking (Windows 95, 98 and Me supported preemptive multitasking only when running 32-bit applications, but ran legacy 16-bit applications using cooperative multitasking), and were very limited in their ability to take advantage of protected memory. Application programs running on these operating systems had to yield CPU time to the scheduler when they were not using it, either by default or by calling a function.
Windows NT's underlying kernel was designed by essentially the same team as Digital Equipment Corporation's VMS, and it provided protected mode operation for all user programs, kernel memory protection, preemptive multitasking, virtual file system support, and a host of other features.
Classic AmigaOS and versions of Microsoft Windows from Windows 1.0 through Windows Me did not properly track resources allocated by processes at runtime.[citation needed] If a process had to be terminated, the resources might not be freed up for new programs until the machine was restarted.
The AmigaOS did have preemptive multitasking, as did operating systems for many larger ("supermini") computers that, despite being technically superior, struggled in sales against the mass production of increasingly fast personal computers and customers locked into non-portable software (legacy software and proprietary office documents).

Mainframes
Through the 1950s, many major features were pioneered in the field of operating systems. The development of the IBM System/360 produced a family of mainframe computers available in widely differing capacities and price points, for which a single operating system, OS/360, was planned (rather than developing ad hoc programs for every individual model). This concept of a single OS spanning an entire product line was crucial for the success of System/360; in fact, IBM's current mainframe operating systems are distant descendants of this original system, and applications written for OS/360 can still be run on modern machines. In the mid-1970s, MVS, the descendant of OS/360, offered the first[citation needed] implementation of using RAM as a transparent cache for disk-resident data.
OS/360 also pioneered a number of concepts that, in some cases, are still not seen outside of the mainframe arena. For instance, in OS/360, when a program is started, the operating system keeps track of all of the system resources that it uses, including storage, locks, data files, and so on. When the process is terminated for any reason, all of these resources are re-claimed by the operating system. IBM's alternative CP-67 system started a whole line of operating systems focused on the concept of virtual machines.
Control Data Corporation developed the SCOPE operating system in the 1960s for batch processing. In cooperation with the University of Minnesota, the KRONOS and later the NOS operating systems were developed during the 1970s, which supported simultaneous batch and timesharing use. Like many commercial timesharing systems, its interface was an extension of the Dartmouth BASIC operating systems, one of the pioneering efforts in timesharing and programming languages. In the late 1970s, Control Data and the University of Illinois developed the PLATO operating system, which used plasma panel displays and long-distance time sharing networks. PLATO was remarkably innovative for its time, featuring real-time chat and multi-user graphical games.

Burroughs Corporation introduced the B5000 in 1961 with the MCP (Master Control Program) operating system. The B5000 was a stack machine designed to exclusively support high-level languages, with no machine language or assembler; indeed, the MCP was the first OS to be written exclusively in a high-level language, ESPOL, a dialect of ALGOL. MCP also introduced many other ground-breaking innovations, such as being the first commercial implementation of virtual memory. During development of the AS/400, IBM approached Burroughs to license MCP to run on the AS/400 hardware. This proposal was declined by Burroughs management to protect its existing hardware production. MCP is still in use today in the Unisys ClearPath/MCP line of computers.
UNIVAC, the first commercial computer manufacturer, produced a series of EXEC operating systems. Like all early main-frame systems, this was a batch-oriented system that managed magnetic drums, disks, card readers and line printers. In the 1970s, UNIVAC produced the Real-Time Basic (RTB) system to support large-scale time sharing, also patterned after the Dartmouth BASIC system.
General Electric and MIT developed General Electric Comprehensive Operating Supervisor (GECOS), which introduced the concept of ringed security privilege levels. After acquisition by Honeywell it was renamed to General Comprehensive Operating System (GCOS).
Digital Equipment Corporation developed many operating systems for its various computer lines, including TOPS-10 and TOPS-20 time sharing systems for the 36-bit PDP-10 class systems. Prior to the widespread use of UNIX, TOPS-10 was a particularly popular system in universities, and in the early ARPANET community.
In the late 1960s through the late 1970s, several hardware capabilities evolved that allowed similar or ported software to run on more than one system. Early systems had utilized microprogramming to implement features on their systems in order to permit different underlying architectures to appear the same as others in a series. In fact, most System/360 models after the 360/40 (except the 360/165 and 360/168) were microprogrammed implementations. But soon other means of achieving application compatibility proved to be more significant.
The enormous investment in software for these systems made since the 1960s caused most of the original computer manufacturers to continue to develop compatible operating systems along with the hardware. Notable supported mainframe operating systems include:
Burroughs MCP – B5000, 1961 to Unisys ClearPath/MCP, present.
IBM OS/360 – IBM System/360, 1966 to IBM z/OS, present.
IBM CP-67 – IBM System/360, 1967 to IBM z/VM, present.
UNIVAC EXEC 8 – UNIVAC 1108, 1967 to OS 2200 on Unisys ClearPath Dorado, present.

Microcomputers
The first microcomputers did not have the capacity or need for the elaborate operating systems that had been developed for mainframes and minis; minimalistic operating systems were developed, often loaded from ROM and known as monitors. One notable early disk-based operating system was CP/M, which was supported on many early microcomputers and was closely imitated by MS-DOS, which became wildly popular as the operating system chosen for the IBM PC (IBM's version of it was called IBM DOS or PC DOS). MS-DOS and its successors made Microsoft one of the world's most profitable companies. In the 1980s, Apple Computer Inc. (now Apple Inc.) abandoned its popular Apple II series of microcomputers to introduce the Apple Macintosh computer with an innovative graphical user interface (GUI) to the Mac OS.
The introduction of the Intel 80386 CPU chip, with its 32-bit architecture and paging capabilities, provided personal computers with the ability to run multitasking operating systems like those of earlier minicomputers and mainframes. Microsoft responded to this progress by hiring Dave Cutler, who had developed the VMS operating system for Digital Equipment Corporation. He would lead the development of the Windows NT operating system, which continues to serve as the basis for Microsoft's operating systems line. Steve Jobs, a co-founder of Apple Inc., started NeXT Computer Inc., which developed the Unix-like NEXTSTEP operating system. NeXT was later acquired by Apple Inc., and NEXTSTEP, along with code from FreeBSD, became the core of Mac OS X.
Minix, an academic teaching tool which could be run on early PCs, would inspire another reimplementation of Unix, called Linux. Started by computer science student Linus Torvalds in cooperation with volunteers over the Internet, Linux was developed with tools from the GNU Project. The Berkeley Software Distribution, known as BSD, is the UNIX derivative distributed by the University of California, Berkeley, starting in the 1970s. Freely distributed and ported to many minicomputers, it eventually also gained a following for use on PCs, mainly as FreeBSD, NetBSD and OpenBSD.

Features

Program execution
Main article: Process (computing)
The operating system acts as an interface between an application and the hardware. The user interacts with the hardware from "the other side". The operating system is a set of services which simplifies development of applications. Executing a program involves the creation of a process by the operating system. The kernel creates a process by assigning memory and other resources, establishing a priority for the process (in multi-tasking systems), loading program code into memory, and executing the program. The program then interacts with the user and/or other devices and performs its intended function.
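The sequence above can be sketched from user space. Python's subprocess module asks the kernel to create a process; the kernel allocates resources, loads the program's code, and executes it, then returns the child's output and exit status to the parent. This is a minimal illustration of the request a program makes, not the kernel's own code:

```python
import subprocess
import sys

# Ask the operating system to create a new process: the kernel allocates
# memory and other resources, loads the interpreter's code, and runs it.
result = subprocess.run(
    [sys.executable, "-c", "print(6 * 7)"],
    capture_output=True,
    text=True,
)

# Once the child terminates, the parent receives its output and exit status.
print("child said:", result.stdout.strip())   # child said: 42
print("exit status:", result.returncode)      # exit status: 0
```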

Interrupts
Main article: interrupt
Interrupts are central to operating systems, as they provide an efficient way for the operating system to interact with and react to its environment. The alternative, having the operating system "watch" the various sources of input for events that require action (polling), is a poor use of CPU resources. Interrupt-based programming is directly supported by most CPUs. Interrupts provide a computer with a way of automatically running specific code in response to events. Even very basic computers support hardware interrupts, and allow the programmer to specify code which may be run when an event takes place.
When an interrupt is received, the computer's hardware automatically suspends whatever program is currently running, saves its status, and runs computer code previously associated with the interrupt; this is analogous to placing a bookmark in a book in response to a phone call. In modern operating systems, interrupts are handled by the operating system's kernel. Interrupts may come from either the computer's hardware or from the running program.
When a hardware device triggers an interrupt the operating system's kernel decides how to deal with this event, generally by running some processing code. How much code gets run depends on the priority of the interrupt (for example: a person usually responds to a smoke detector alarm before answering the phone). The processing of hardware interrupts is a task that is usually delegated to software called device drivers, which may be either part of the operating system's kernel, part of another program, or both. Device drivers may then relay information to a running program by various means.
A program may also trigger an interrupt to the operating system. If a program wishes to access hardware for example, it may interrupt the operating system's kernel, which causes control to be passed back to the kernel. The kernel will then process the request. If a program wishes additional resources (or wishes to shed resources) such as memory, it will trigger an interrupt to get the kernel's attention.
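On UNIX-like systems, signals are a user-space analogue of this mechanism: a program registers handler code, and the kernel suspends normal execution to run that handler when the event arrives, after which execution resumes. A minimal sketch (POSIX-only, since it relies on SIGALRM and an interval timer):

```python
import signal

events = []

# Handler: the code the kernel arranges to run when the signal
# (a software interrupt it delivers) arrives; normal execution is
# suspended while the handler runs, then resumes.
def on_alarm(signum, frame):
    events.append("alarm handled")

signal.signal(signal.SIGALRM, on_alarm)
signal.setitimer(signal.ITIMER_REAL, 0.2)  # ask the kernel for SIGALRM in 0.2s

signal.pause()        # sleep until a signal is delivered
print(events)         # ['alarm handled']
```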

Protected mode and supervisor mode
Main article: Protected mode
Main article: Supervisor mode
Modern CPUs support something called dual mode operation. CPUs with this capability use two modes: protected mode and supervisor mode, which allow certain CPU functions to be controlled and affected only by the operating system kernel. Here, protected mode does not refer specifically to the 80286 (Intel's x86 16-bit microprocessor) CPU feature, although its protected mode is very similar to it. CPUs might have other modes similar to 80286 protected mode as well, such as the virtual 8086 mode of the 80386 (Intel's x86 32-bit microprocessor or i386).
However, the term is used here more generally in operating system theory to refer to all modes which limit the capabilities of programs running in that mode, providing things like virtual memory addressing and limiting access to hardware in a manner determined by a program running in supervisor mode. Similar modes have existed in supercomputers, minicomputers, and mainframes as they are essential to fully supporting UNIX-like multi-user operating systems.
When a computer first starts up, it is automatically running in supervisor mode. The first few programs to run on the computer, namely the BIOS, the bootloader and the operating system, have unlimited access to hardware; this is required because, by definition, initializing a protected environment can only be done outside of one. However, when the operating system passes control to another program, it can place the CPU into protected mode.
In protected mode, programs may have access to a more limited set of the CPU's instructions. A user program may leave protected mode only by triggering an interrupt, causing control to be passed back to the kernel. In this way the operating system can maintain exclusive control over things like access to hardware and memory.
The term "protected mode resource" generally refers to one or more CPU registers which contain information that the running program is not allowed to alter. Attempts to alter these resources generally cause a switch to supervisor mode, where the operating system can deal with the illegal operation the program was attempting (for example, by killing the program).
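The mode switch described above can be illustrated with a toy simulation (not real hardware, and the names are made up): a privileged operation succeeds in supervisor mode, but attempting it from user mode raises a trap that hands control back to the "kernel":

```python
# A toy model of dual-mode operation: privileged operations are permitted
# only in supervisor mode; attempting one in user mode raises a trap.

class TrapToSupervisor(Exception):
    """Raised when user-mode code attempts a privileged operation."""

class ToyCPU:
    def __init__(self):
        self.mode = "supervisor"       # the machine boots in supervisor mode

    def enter_user_mode(self):
        self.mode = "user"

    def privileged_op(self, name):
        if self.mode != "supervisor":
            raise TrapToSupervisor(name)
        return f"{name}: done"

cpu = ToyCPU()
print(cpu.privileged_op("set page table"))   # allowed at boot

cpu.enter_user_mode()
try:
    cpu.privileged_op("set page table")      # a user program tries the same
except TrapToSupervisor as trap:
    # control returns to the kernel, which decides how to handle it
    cpu.mode = "supervisor"
    print("trapped:", trap)
```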

Memory management
Main article: memory management
Among other things, a multiprogramming operating system kernel must be responsible for managing all system memory which is currently in use by programs. This ensures that a program does not interfere with memory already used by another program. Since programs time share, each program must have independent access to memory.
Cooperative memory management, used by many early operating systems, assumes that all programs make voluntary use of the kernel's memory manager and do not exceed their allocated memory. This system of memory management is almost never seen anymore, since programs often contain bugs which can cause them to exceed their allocated memory. If a program fails, it may cause memory used by one or more other programs to be affected or overwritten. Malicious programs or viruses may purposefully alter another program's memory, or may affect the operation of the operating system itself. With cooperative memory management, it takes only one misbehaving program to crash the system.
Memory protection enables the kernel to limit a process' access to the computer's memory. Various methods of memory protection exist, including memory segmentation and paging. All methods require some level of hardware support (such as the 80286 MMU) which doesn't exist in all computers.
In both segmentation and paging, certain protected mode registers specify to the CPU what memory address it should allow a running program to access. Attempts to access other addresses trigger an interrupt which causes the CPU to re-enter supervisor mode, placing the kernel in charge. This is called a segmentation violation (Seg-V for short), and since it is difficult to assign a meaningful result to such an operation, and because it is usually a sign of a misbehaving program, the kernel will generally terminate the offending program and report the error.
Windows 3.1 through Windows Me had some level of memory protection, but programs could easily circumvent it. Under Windows 9x all MS-DOS applications ran in supervisor mode, giving them almost unlimited control over the computer. A general protection fault would be produced, indicating a segmentation violation had occurred; however, the system would often crash anyway.
In most Linux systems, part of the hard disk is reserved for virtual memory when the Operating system is being installed on the system. This part is known as swap space. Windows systems use a swap file instead of a partition.

Virtual memory
The use of virtual memory addressing (such as paging or segmentation) means that the kernel can choose what memory each program may use at any given time, allowing the operating system to use the same memory locations for multiple tasks.
If a program tries to access memory that isn't in its current range of accessible memory, but nonetheless has been allocated to it, the kernel will be interrupted in the same way as it would if the program were to exceed its allocated memory. (See section on memory management.) Under UNIX this kind of interrupt is referred to as a page fault.
When the kernel detects a page fault it will generally adjust the virtual memory range of the program which triggered it, granting it access to the memory requested. This gives the kernel discretionary power over where a particular application's memory is stored, or even whether or not it has actually been allocated yet.
In modern operating systems, application memory which is accessed less frequently can be temporarily stored on disk or other media to make that space available for use by other programs. This is called swapping, as an area of memory can be used by multiple programs, and what that memory area contains can be swapped or exchanged on demand.
Further information: Page fault
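The behaviour described in the last two sections can be sketched with a toy paging model (a simulation with made-up names, not a real MMU): an access to a page that has been swapped out raises a recoverable page fault that the "kernel" fixes by swapping the page back in, while an access to a page that was never allocated is a segmentation violation:

```python
# A toy paging model: each page is resident in RAM, swapped out to "disk",
# or simply not allocated at all.

class SegmentationViolation(Exception):
    pass

class ToyMMU:
    def __init__(self):
        self.resident = {}   # page number -> contents currently in RAM
        self.swapped = {}    # page number -> contents moved to "disk"

    def allocate(self, page, data):
        self.resident[page] = data

    def swap_out(self, page):
        # the kernel frees this RAM for use by another program
        self.swapped[page] = self.resident.pop(page)

    def read(self, page):
        if page in self.resident:
            return self.resident[page]
        if page in self.swapped:
            # Page fault: the page is allocated but not in RAM, so the
            # kernel transparently brings it back and retries the access.
            self.resident[page] = self.swapped.pop(page)
            print(f"page fault on page {page}: swapped back in")
            return self.resident[page]
        raise SegmentationViolation(page)   # never allocated: Seg-V

mmu = ToyMMU()
mmu.allocate(3, "hello")
mmu.swap_out(3)
print(mmu.read(3))              # page fault, then: hello
try:
    mmu.read(9)                 # this page was never allocated
except SegmentationViolation:
    print("terminating offending program")
```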

Multitasking
Main article: Computer multitasking
Main article: Process management (computing)
Multitasking refers to the running of multiple independent computer programs on the same computer, giving the appearance that it is performing the tasks at the same time. Since most computers can do at most one or two things at one time, this is generally done via time sharing, which means that each program uses a share of the computer's time to execute.
An operating system kernel contains a piece of software called a scheduler which determines how much time each program will spend executing, and in which order execution control should be passed to programs. Control is passed to a process by the kernel, which allows the program access to the CPU and memory. At a later time control is returned to the kernel through some mechanism, so that another program may be allowed to use the CPU. This so-called passing of control between the kernel and applications is called a context switch.
An early model which governed the allocation of time to programs was called cooperative multitasking. In this model, when control is passed to a program by the kernel, it may execute for as long as it wants before explicitly returning control to the kernel. This means that a malicious or malfunctioning program may not only prevent any other programs from using the CPU, but it can hang the entire system if it enters an infinite loop.
The philosophy governing preemptive multitasking is that of ensuring that all programs are given regular time on the CPU. This implies that all programs must be limited in how much time they are allowed to spend on the CPU without being interrupted. To accomplish this, modern operating system kernels make use of a timed interrupt. A protected mode timer is set by the kernel which triggers a return to supervisor mode after the specified time has elapsed. (See above sections on Interrupts and Dual Mode Operation.)
On many single user operating systems cooperative multitasking is perfectly adequate, as home computers generally run a small number of well-tested programs. Windows NT was the first version of Microsoft Windows which enforced preemptive multitasking, but it didn't reach the home user market until Windows XP (since Windows NT was targeted at professionals).
Further information: Context switch
Further information: Preemptive multitasking
Further information: Cooperative multitasking
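Cooperative multitasking can be modelled in a few lines with Python generators: each "program" runs until it voluntarily yields control back to a round-robin scheduler, so a task that never yielded would hang the whole loop, exactly the weakness described above. This is an illustration of the scheduling concept, not an operating system scheduler:

```python
# Each task is a generator; `yield` is the voluntary return of control
# (the context switch back to the scheduler).

def task(name, steps):
    for i in range(steps):
        print(f"{name} step {i}")
        yield

def scheduler(tasks):
    # Round-robin: pass control to each task in turn until all finish.
    order = []
    while tasks:
        name, gen = tasks.pop(0)
        try:
            next(gen)              # hand the CPU to this task
            order.append(name)
            tasks.append((name, gen))  # not finished: back of the queue
        except StopIteration:
            pass                   # task finished; drop it
    return order

run_order = scheduler([("A", task("A", 2)), ("B", task("B", 1))])
print(run_order)                   # ['A', 'B', 'A']
```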

Kernel Preemption
In recent years concerns have arisen because of long latencies often associated with some kernel run-times, sometimes on the order of 100ms or more in systems with monolithic kernels. These latencies often produce noticeable slowness in desktop systems, and can prevent operating systems from performing time-sensitive operations such as audio recording and some communications.
Modern operating systems extend the concepts of application preemption to device drivers and kernel code, so that the operating system has preemptive control over internal run-times as well. Under Windows Vista, the introduction of the Windows Display Driver Model (WDDM) accomplishes this for display drivers, and in Linux, the preemptible kernel model introduced in version 2.6 allows all device drivers and some other parts of kernel code to take advantage of preemptive multitasking.
Under Windows prior to Windows Vista and Linux prior to version 2.6, all driver execution was cooperative, meaning that if a driver entered an infinite loop it would freeze the system.

Disk access and file systems
Main article: Virtual file system
Access to data stored on disks is a central feature of all operating systems. Computers store data on disks using files, which are structured in specific ways in order to allow for faster access, higher reliability, and to make better use out of the drive's available space. The specific way in which files are stored on a disk is called a file system, and enables files to have names and attributes. It also allows them to be stored in a hierarchy of directories or folders arranged in a directory tree.
Early operating systems generally supported a single type of disk drive and only one kind of file system. Early file systems were limited in their capacity, speed, and in the kinds of file names and directory structures they could use. These limitations often reflected limitations in the operating systems they were designed for, making it very difficult for an operating system to support more than one file system.
While many simpler operating systems support a limited range of options for accessing storage systems, operating systems like UNIX and Linux support a technology known as a virtual file system or VFS. An operating system such as UNIX allows a wide array of storage devices, regardless of their design or file systems, to be accessed through a common application programming interface (API). This makes it unnecessary for programs to have any knowledge about the device they are accessing. A VFS allows the operating system to provide programs with access to an unlimited number of devices, with an infinite variety of file systems installed on them, through the use of specific device drivers and file system drivers.
A connected storage device such as a hard drive is accessed through a device driver. The device driver understands the specific language of the drive and is able to translate that language into a standard language used by the operating system to access all disk drives. On UNIX this is the language of block devices.
When the kernel has an appropriate device driver in place, it can then access the contents of the disk drive in raw format, which may contain one or more file systems. A file system driver is used to translate the commands used to access each specific file system into a standard set of commands that the operating system can use to talk to all file systems. Programs can then deal with these file systems on the basis of filenames, and directories/folders, contained within a hierarchical structure. They can create, delete, open, and close files, as well as gather various information about them, including access permissions, size, free space, and creation and modification dates.
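The layering described above can be sketched as follows (hypothetical drivers and a toy dispatch mechanism, not a real kernel API): programs call one common interface, and a per-filesystem driver translates those calls into each file system's own conventions:

```python
# A toy VFS: one interface for programs, per-filesystem drivers underneath.

class FileSystemDriver:
    def read_file(self, path):
        raise NotImplementedError

class FatDriver(FileSystemDriver):
    def __init__(self):
        self.files = {"README.TXT": "hello from FAT"}
    def read_file(self, path):
        return self.files[path.upper()]   # FAT names are case-insensitive

class ExtDriver(FileSystemDriver):
    def __init__(self):
        self.files = {"readme.txt": "hello from ext"}
    def read_file(self, path):
        return self.files[path]           # ext names are case-sensitive

class VFS:
    def __init__(self):
        self.mounts = {}                  # mount point -> driver

    def mount(self, point, driver):
        self.mounts[point] = driver

    def read(self, path):
        # Dispatch on the longest matching mount point.
        for point in sorted(self.mounts, key=len, reverse=True):
            if path.startswith(point):
                return self.mounts[point].read_file(path[len(point):])
        raise FileNotFoundError(path)

vfs = VFS()
vfs.mount("/mnt/usb/", FatDriver())
vfs.mount("/home/", ExtDriver())
print(vfs.read("/mnt/usb/readme.txt"))   # hello from FAT
print(vfs.read("/home/readme.txt"))      # hello from ext
```

The calling program never learns which file system, or which device, actually held the data; that is the abstraction the VFS provides.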
Various differences between file systems make supporting all file systems difficult. Allowed characters in file names, case sensitivity, and the presence of various kinds of file attributes make the implementation of a single interface for every file system a daunting task. Operating systems tend to recommend the use of (and so support natively) file systems specifically designed for them; for example, NTFS in Windows and ext3 and ReiserFS in Linux. However, in practice, third-party drivers are usually available to give support for the most widely used filesystems in most general-purpose operating systems (for example, NTFS is available in Linux through NTFS-3g, and ext2/3 and ReiserFS are available in Windows through FS-driver and rfstool).

Device drivers
Main article: Device driver
A device driver is a specific type of computer software developed to allow interaction with hardware devices. Typically this constitutes an interface for communicating with the device, through the specific computer bus or communications subsystem that the hardware is connected to, providing commands to and/or receiving data from the device, and, on the other end, the requisite interfaces to the operating system and software applications. It is a specialized, hardware-dependent and operating-system-specific program that enables another program (typically the operating system, or an application running under its kernel) to interact transparently with a hardware device, and it usually provides the interrupt handling required for asynchronous, time-dependent hardware interfaces.
The key design goal of device drivers is abstraction. Every model of hardware (even within the same class of device) is different. Manufacturers also release newer models that provide more reliable or better performance, and these newer models are often controlled differently. Computers and their operating systems cannot be expected to know how to control every device, both now and in the future. To solve this problem, operating systems essentially dictate how every type of device should be controlled. The function of the device driver is then to translate these OS-mandated function calls into device-specific calls. In theory a new device, which is controlled in a new manner, should function correctly if a suitable driver is available. This new driver ensures that the device appears to operate as usual from the operating system's point of view.
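This translation can be sketched with hypothetical drivers (a toy model, not a real driver API): the "OS" mandates one interface for a class of device, and each driver maps that interface onto its own device's behaviour:

```python
# The OS mandates one interface per device class; drivers translate it.

class BlockDevice:
    """The interface the OS requires of every storage device."""
    def read_block(self, n):
        raise NotImplementedError

class OldDiskDriver(BlockDevice):
    def read_block(self, n):
        # internally this model might address data by cylinder/head/sector
        return f"old-disk data for block {n}"

class NewSSDDriver(BlockDevice):
    def read_block(self, n):
        # a newer device, controlled completely differently inside
        return f"ssd data for block {n}"

def dump_first_block(device: BlockDevice):
    # OS code is written once against the interface; any conforming
    # driver makes a new device "appear to operate as usual".
    return device.read_block(0)

print(dump_first_block(OldDiskDriver()))   # old-disk data for block 0
print(dump_first_block(NewSSDDriver()))    # ssd data for block 0
```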

Networking
Main article: Computer network
Currently most operating systems support a variety of networking protocols, hardware, and applications for using them. This means that computers running dissimilar operating systems can participate in a common network for sharing resources such as computing, files, printers, and scanners using either wired or wireless connections. Networks can essentially allow a computer's operating system to access the resources of a remote computer to support the same functions as it could if those resources were connected directly to the local computer. This includes everything from simple communication, to using networked file systems or even sharing another computer's graphics or sound hardware. Some network services allow the resources of a computer to be accessed transparently, such as SSH which allows networked users direct access to a computer's command line interface.
Client/server networking involves a program on one computer, the client, connecting via a network to another computer, called a server. Servers, usually running UNIX or Linux, offer (or host) various services to other network computers and users. These services are usually provided through ports, numbered access points beyond the server's network address. Each port number is usually associated with a maximum of one running program, which is responsible for handling requests to that port. A daemon, being a user program, can in turn access the local hardware resources of that computer by passing requests to the operating system kernel.
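The port-and-service model can be demonstrated with Python's socket module: the server binds a numbered port (port 0 asks the OS to pick a free one), a client connects to that port, and the kernel carries the bytes between the two programs:

```python
import socket
import threading

# A minimal client/server exchange over a numbered port.

def serve_once(server):
    conn, _addr = server.accept()          # wait for one client
    with conn:
        request = conn.recv(1024)
        conn.sendall(b"echo: " + request)  # the "service" this port offers

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))              # port 0: let the OS choose a free port
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve_once, args=(server,)).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"hello")
    reply = client.recv(1024)

server.close()
print(reply.decode())                      # echo: hello
```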
Many operating systems support one or more vendor-specific or open networking protocols as well, for example, SNA on IBM systems, DECnet on systems from Digital Equipment Corporation, and Microsoft-specific protocols (SMB) on Windows. Specific protocols for specific tasks may also be supported such as NFS for file access. Protocols like ESound, or esd can be easily extended over the network to provide sound from local applications, on a remote system's sound hardware.

Security
Main article: Computer security
The security of a computer depends on a number of technologies working properly. A modern operating system provides access to a number of resources, which are available to software running on the system, and to external devices like networks via the kernel.
The operating system must be capable of distinguishing between requests which should be allowed to be processed and others which should not. While some systems may simply distinguish between "privileged" and "non-privileged", systems commonly have a form of requester identity, such as a user name. To establish identity there may be a process of authentication: often a username must be stated, and each username may have a password. Other methods of authentication, such as magnetic cards or biometric data, might be used instead. In some cases, especially for connections from the network, resources may be accessed with no authentication at all (such as reading files over a network share). Also covered by the concept of requester identity is authorization: the particular services and resources the requester may access once logged into a system, tied either to the requester's user account or to the variously configured groups of users to which the requester belongs.
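The two checks can be separated in a toy model (illustrative only; real systems use salted, deliberately slow password hashes and far richer access-control machinery): authentication verifies who the requester is, and authorization decides what that identity may do, here via group membership:

```python
import hashlib

# Toy account database: stored password digests, group memberships,
# and per-resource access-control lists (all names are made up).
users = {"alice": hashlib.sha256(b"s3cret").hexdigest()}
groups = {"alice": {"staff"}}
acl = {"payroll.txt": {"staff"}, "motd.txt": {"everyone"}}

def authenticate(name, password):
    # Authentication: does the supplied password match the stored digest?
    digest = hashlib.sha256(password.encode()).hexdigest()
    return users.get(name) == digest

def authorized(name, resource):
    # Authorization: is the identity (via its groups) allowed this resource?
    allowed = acl.get(resource, set())
    return "everyone" in allowed or bool(groups.get(name, set()) & allowed)

print(authenticate("alice", "s3cret"))      # True
print(authorized("alice", "payroll.txt"))   # True (via the staff group)
print(authorized("bob", "payroll.txt"))     # False
```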
In addition to the allow/disallow model of security, a system with a high level of security will also offer auditing options. These would allow tracking of requests for access to resources (such as "who has been reading this file?"). Internal security, or security from an already running program, is possible only if all potentially harmful requests must be carried out through interrupts to the operating system kernel. If programs can directly access hardware and resources, they cannot be secured.
External security involves a request from outside the computer, such as a login at a connected console or some kind of network connection. External requests are often passed through device drivers to the operating system's kernel, where they can be passed onto applications, or carried out directly. Security of operating systems has long been a concern because of highly sensitive data held on computers, both of a commercial and military nature. The United States Government Department of Defense (DoD) created the Trusted Computer System Evaluation Criteria (TCSEC) which is a standard that sets basic requirements for assessing the effectiveness of security. This became of vital importance to operating system makers, because the TCSEC was used to evaluate, classify and select computer systems being considered for the processing, storage and retrieval of sensitive or classified information.
Network services include offerings such as file sharing, print services, email, web sites, and file transfer protocols (FTP), most of which can have compromised security. At the front line of security are hardware devices known as firewalls or intrusion detection/prevention systems. At the operating system level, there are a number of software firewalls available, as well as intrusion detection/prevention systems. Most modern operating systems include a software firewall, which is enabled by default. A software firewall can be configured to allow or deny network traffic to or from a service or application running on the operating system. Therefore, one can install and run an insecure service, such as Telnet or FTP, with reduced exposure to a security breach, because the firewall would deny all traffic trying to connect to the service on that port.
An alternative strategy, and the only sandbox strategy available in systems that do not meet the Popek and Goldberg virtualization requirements, is for the operating system not to run user programs as native code, but instead to emulate a processor or provide a host for a p-code based system such as Java.
Internal security is especially relevant for multi-user systems; it allows each user of the system to have private files that the other users cannot tamper with or read. Internal security is also vital if auditing is to be of any use, since a program can potentially bypass the operating system, inclusive of bypassing auditing.

Example: Microsoft Windows
While the Windows 9x series offered the option of having profiles for multiple users, they had no concept of access privileges, and did not allow concurrent access; and so were not true multi-user operating systems. In addition, they implemented only partial memory protection. They were accordingly widely criticised for lack of security.
The Windows NT series of operating systems, by contrast, are true multi-user systems and implement full memory protection. However, many of the advantages of being a true multi-user operating system were nullified by the fact that, prior to Windows Vista, the first user account created during the setup process was an administrator account, which was also the default for new accounts. Though Windows XP did have limited accounts, the majority of home users did not change to an account type with fewer rights, partially due to the number of programs which unnecessarily required administrator rights, and so most home users ran as administrator all the time.
Windows Vista changes this by introducing a privilege elevation system called User Account Control. When logging in as a standard user, a logon session is created and a token containing only the most basic privileges is assigned. In this way, the new logon session is incapable of making changes that would affect the entire system. When logging in as a user in the Administrators group, two separate tokens are assigned. The first token contains all privileges typically awarded to an administrator, and the second is a restricted token similar to what a standard user would receive. User applications, including the Windows Shell, are then started with the restricted token, resulting in a reduced privilege environment even under an Administrator account. When an application requests higher privileges or "Run as administrator" is clicked, UAC will prompt for confirmation and, if consent is given (including administrator credentials if the account requesting the elevation is not a member of the administrators group), start the process using the unrestricted token.
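The dual-token scheme can be sketched informally. This is an illustrative model only; the function and privilege names below are hypothetical and do not correspond to the actual Windows security API.

```python
# Toy model of UAC's two tokens: an administrator logon carries a full
# token and a restricted token; new processes get the restricted one by
# default, and the full token is used only after elevation is confirmed.

ADMIN_PRIVILEGES = {"read", "write", "install_software", "change_system_settings"}
STANDARD_PRIVILEGES = {"read", "write"}

def start_process(is_admin: bool, elevated: bool) -> set:
    """Return the privilege set a newly started process runs with."""
    if is_admin and elevated:
        return ADMIN_PRIVILEGES      # unrestricted token, after UAC consent
    return STANDARD_PRIVILEGES       # restricted token is the default

# Even an administrator's applications run restricted until elevation:
assert start_process(is_admin=True, elevated=False) == STANDARD_PRIVILEGES
assert start_process(is_admin=True, elevated=True) == ADMIN_PRIVILEGES
```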

Example: Linux/Unix
Linux and UNIX both have two-tier security, which limits any system-wide changes to the root user, a special user account on all UNIX-like systems. While the root user has virtually unlimited permission to effect system changes, programs running as a regular user are limited in where they can save files, what hardware they can access, and so on. In many systems, a user's memory usage, selection of available programs, total disk usage or quota, available range of program priority settings, and other functions can also be locked down. This provides the user with plenty of freedom to do what needs to be done, without being able to put any part of the system in jeopardy (barring accidental triggering of system-level bugs) or make sweeping, system-wide changes. The user's settings are stored in an area of the computer's file system called the user's home directory, which is also provided as a location where the user may store their work, a concept later adopted by Windows as the 'My Documents' folder. Should a user need to install software outside their home directory or make system-wide changes, they must become the root user temporarily, usually with the su or sudo command, supplying the root password when prompted. Some systems (such as Ubuntu and its derivatives) are configured by default to allow select users to run programs as the root user via the sudo command, using the user's own password for authentication instead of the system's root password. One is sometimes said to "go root" or "drop to root" when elevating oneself to root access.
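The per-user restrictions rest concretely on Unix file permission bits, which Python's standard library can render in the familiar `ls -l` form:

```python
# Each file carries a numeric mode encoding read/write/execute bits for
# the owning user, the group, and everyone else. stat.filemode() renders
# a mode the way `ls -l` displays it.
import stat

mode = 0o100644  # regular file: owner read/write, group and others read-only
print(stat.filemode(mode))       # -rw-r--r--
print(stat.filemode(0o100755))   # -rwxr-xr-x (owner may also execute)
```

A regular process can only open files these bits permit for its user; the root user (effective user ID 0 on Unix-like systems) bypasses the checks entirely.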
For more information on the differences between the Linux su/sudo approach and Vista's User Account Control, see Comparison of privilege authorization features.

File system support in modern operating systems
Support for file systems varies widely among modern operating systems, although there are several common file systems for which almost all operating systems include support and drivers.

Solaris
The Solaris Operating System (as with most operating systems based upon open standards and/or open source) uses UFS as its primary file system. Prior to 1998, Solaris UFS did not have logging/journaling capabilities, but over time the OS has gained this and other new data management capabilities.
Additional features include the Veritas journaling file system (VxFS), QFS from Sun Microsystems, enhancements to UFS including multi-terabyte support and UFS volume management included as part of the OS, and ZFS (open source, poolable, 128-bit, compressible, and error-correcting).
Kernel extensions were added to Solaris to allow for bootable Veritas VxFS operation. Logging or journaling was added to UFS in Solaris 7. Releases of Solaris 10, Solaris Express, OpenSolaris, and other open source variants of Solaris later supported bootable ZFS.
Logical volume management allows for spanning a file system across multiple devices for the purpose of adding redundancy, capacity, and/or throughput. Solaris includes Solaris Volume Manager (formerly known as Solstice DiskSuite). Solaris is one of many operating systems supported by Veritas Volume Manager. Modern Solaris-based operating systems eliminate the need for separate volume management by leveraging virtual storage pools in ZFS.

Linux
Many Linux distributions support some or all of ext2, ext3, ext4, ReiserFS, Reiser4, JFS, XFS, GFS, GFS2, OCFS, OCFS2, and NILFS. The ext file systems, namely ext2, ext3 and ext4, are based on the original Linux file system. Others have been developed by companies to meet specific needs, by hobbyists, or adapted from UNIX, Microsoft Windows, and other operating systems. Linux has full support for XFS and JFS, along with FAT (the MS-DOS file system) and HFS, which was the primary file system for the Macintosh.
In recent years support for Microsoft Windows NT's NTFS file system has appeared in Linux, and is now comparable to the support available for other native UNIX file systems. ISO 9660 and Universal Disk Format (UDF), the standard file systems used on CDs, DVDs, and Blu-ray discs, are also supported. It is possible to install Linux on the majority of these file systems. Unlike other operating systems, Linux and UNIX allow any file system to be used regardless of the medium it is stored on, whether that is a hard drive, an optical disc (CD, DVD, etc.), a USB key, or even a file located on another file system.

Microsoft Windows
Microsoft Windows currently supports NTFS and FAT file systems, along with network file systems shared from other computers, and the ISO 9660 and UDF file systems used for CDs, DVDs, and other optical discs such as Blu-ray. Under Windows, each file system is usually limited in application to certain media; for example, CDs must use ISO 9660 or UDF, and, as of Windows Vista, NTFS is the only file system on which the operating system can be installed. Windows Embedded CE 6.0, Windows Vista Service Pack 1, and Windows Server 2008 support exFAT, a file system more suitable for flash drives.

Mac OS X
Mac OS X supports HFS+ with journaling as its primary file system, derived from the Hierarchical File System of the earlier Mac OS. Mac OS X can read and write FAT and UDF, and can read NTFS (the open-source cross-platform implementation NTFS-3G adds read-write NTFS support), among other file systems, but cannot be installed to them. Due to its UNIX heritage, Mac OS X now supports virtually all the file systems supported by the UNIX VFS. Recently Apple Inc. started work on porting Sun Microsystems' ZFS file system to Mac OS X, and preliminary support is already available in Mac OS X 10.5.

Special-purpose file systems
FAT file systems are commonly found on floppy disks, flash memory cards, digital cameras, and many other portable devices because of their relative simplicity. Performance of FAT compares poorly to most other file systems as it uses overly simplistic data structures, making file operations time-consuming, and makes poor use of disk space in situations where many small files are present. ISO 9660 and Universal Disk Format are two common formats that target Compact Discs and DVDs. Mount Rainier is a newer extension to UDF supported by Linux 2.6 series and Windows Vista that facilitates rewriting to DVDs in the same fashion as has been possible with floppy disks.
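FAT's simplicity can be illustrated with a toy model: the allocation table is essentially a map from each cluster of a file to the next, and reading a file means walking that chain. The table contents and sentinel value below are hypothetical.

```python
# Toy File Allocation Table: each entry maps a cluster number to the next
# cluster of the same file; a sentinel marks end-of-chain. A file that
# starts at cluster 2 here occupies clusters 2 -> 5 -> 6.

EOC = 0xFFF  # end-of-chain sentinel (FAT12-style)

fat = {2: 5, 5: 6, 6: EOC, 3: EOC}  # hypothetical table contents

def cluster_chain(start: int) -> list:
    """Walk the chain from a file's first cluster to end-of-chain."""
    chain = []
    cluster = start
    while cluster != EOC:
        chain.append(cluster)
        cluster = fat[cluster]
    return chain

print(cluster_chain(2))  # [2, 5, 6]
```

The simplicity is also the weakness: every seek walks the chain from the start, and fragmented small files scatter clusters across the table.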

Journalized file systems
File systems may provide journaling, which provides safe recovery in the event of a system crash. A journaled file system writes some information twice: first to the journal, which is a log of file system operations, then to its proper place in the ordinary file system. Journaling is handled by the file system driver, and keeps track of each operation taking place that changes the contents of the disk. In the event of a crash, the system can recover to a consistent state by replaying a portion of the journal. Many UNIX file systems provide journaling including ReiserFS, JFS, and Ext3.
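The write-twice discipline described above can be sketched in a few lines. This is an in-memory toy, not a real file system driver; the structure and names are illustrative only.

```python
# Write-ahead journaling sketch: every change is recorded in the journal
# first, then applied to its proper place. After a crash, replaying the
# journal over a possibly stale disk restores a consistent state.

journal = []   # ordered log of (key, value) operations
disk = {}      # stands in for the ordinary file system

def write(key, value):
    journal.append((key, value))  # 1. record the intent in the journal
    disk[key] = value             # 2. then write to its proper place

def recover(stale_disk):
    """Replay the journal over a stale copy of the disk."""
    for key, value in journal:
        stale_disk[key] = value
    return stale_disk

write("file_a", "hello")
write("file_b", "world")
# Simulate a crash where file_b never reached its proper place:
stale = {"file_a": "hello"}
print(recover(stale))  # {'file_a': 'hello', 'file_b': 'world'}
```

Real journals additionally mark entries as committed and truncate old records, but the recover-by-replay idea is the same.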
In contrast, non-journaled file systems typically need to be examined in their entirety by a utility such as fsck or chkdsk for any inconsistencies after an unclean shutdown. Soft updates is an alternative to journaling that avoids the redundant writes by carefully ordering the update operations. Log-structured file systems and ZFS also differ from traditional journaled file systems in that they avoid inconsistencies by always writing new copies of the data, eschewing in-place updates.

Graphical user interfaces
Most modern computer systems support graphical user interfaces (GUI), and often include them. In some computer systems, such as the original implementations of Microsoft Windows and the Mac OS, the GUI is integrated into the kernel.
While a graphical user interface is not, strictly speaking, an operating system service, incorporating support for one into the operating system kernel can make the GUI more responsive by reducing the number of context switches required for it to perform its output functions. Other operating systems are modular, separating the graphics subsystem from the kernel and the rest of the operating system. In the 1980s, UNIX, VMS, and many other operating systems were built this way. Linux and Mac OS X are also built this way. Modern releases of Microsoft Windows, such as Windows Vista, implement a graphics subsystem that is mostly in user space, whereas in versions from Windows NT 4.0 through Windows Server 2003 the graphics drawing routines exist mostly in kernel space. Windows 9x had very little distinction between the interface and the kernel.
Many computer operating systems allow the user to install or create any user interface they desire. The X Window System in conjunction with GNOME or KDE is a commonly found setup on most Unix and Unix-like (BSD, Linux, Minix) systems. A number of Windows shell replacements have been released for Microsoft Windows, which offer alternatives to the included Windows shell, but the shell itself cannot be separated from Windows.
Numerous Unix-based GUIs have existed over time, most derived from X11. Competition among the various vendors of Unix (HP, IBM, Sun) led to much fragmentation, and a 1990s effort to standardize on COSE and CDE largely failed, eventually eclipsed by the widespread adoption of GNOME and KDE. Prior to open source-based toolkits and desktop environments, Motif was the prevalent toolkit/desktop combination (and was the basis upon which CDE was developed).
Graphical user interfaces evolve over time. For example, Windows has modified its user interface almost every time a new major version of Windows is released, and the Mac OS GUI changed dramatically with the introduction of Mac OS X in 2001.

Examples of operating systems

Microsoft Windows

Windows Vista is the latest stable Windows operating system.
Microsoft Windows is a family of proprietary operating systems that originated as an add-on to the older MS-DOS operating system for the IBM PC. Modern versions are based on the newer Windows NT kernel that was originally intended for OS/2. Windows runs on x86, x86-64 and Itanium processors. Earlier versions also ran on the DEC Alpha, MIPS, Fairchild (later Intergraph) Clipper and PowerPC architectures (some work was done to port it to the SPARC architecture).
As of June 2008, Microsoft Windows holds a large amount of the worldwide desktop market share. Windows is also used on servers, supporting applications such as web servers and database servers. In recent years, Microsoft has spent significant marketing and research & development money to demonstrate that Windows is capable of running any enterprise application, which has resulted in consistent price/performance records (see the TPC) and significant acceptance in the enterprise market.
The most widely used version of the Microsoft Windows family is Windows XP, released on October 25, 2001.
In November 2006, after more than five years of development work, Microsoft released Windows Vista, a major new operating system version of Microsoft Windows family which contains a large number of new features and architectural changes. Chief amongst these are a new user interface and visual style called Windows Aero, a number of new security features such as User Account Control, and a few new multimedia applications such as Windows DVD Maker. A server variant based on the same kernel, Windows Server 2008, was released in early 2008.
Windows 7 is currently under development; Microsoft has stated that it intends to scope its development to a three-year timeline; it is to be released on October 22, 2009.

Unix and Unix-like operating systems

Debian is a Linux-based Unix-like system
Ken Thompson wrote B, based mainly on BCPL, which he used to write Unix, drawing on his experience in the MULTICS project. B was replaced by C, and Unix developed into a large, complex family of interrelated operating systems which have influenced every modern operating system (see History). The Unix-like family is a diverse group of operating systems, with several major sub-categories including System V, BSD, and Linux. The name "UNIX" is a trademark of The Open Group which licenses it for use with any operating system that has been shown to conform to their definitions. "Unix-like" is commonly used to refer to the large set of operating systems which resemble the original Unix.
Unix-like systems run on a wide variety of machine architectures. They are used heavily for servers in business, as well as workstations in academic and engineering environments. Free Unix variants, such as GNU, Linux and BSD, are popular in these areas.
Some Unix variants like HP's HP-UX and IBM's AIX are designed to run only on that vendor's hardware. Others, such as Solaris, can run on multiple types of hardware, including x86 servers and PCs. Apple's Mac OS X, a hybrid kernel-based BSD variant derived from NeXTSTEP, Mach, and FreeBSD, has replaced Apple's earlier (non-Unix) Mac OS.
Unix interoperability was sought by establishing the POSIX standard. The POSIX standard can be applied to any operating system, although it was originally created for various Unix variants.

Mac OS X

Mac OS X "Leopard"
Mac OS X is a line of proprietary, graphical operating systems developed, marketed, and sold by Apple Inc., the latest of which is pre-loaded on all currently shipping Macintosh computers. Mac OS X is the successor to the original Mac OS, which had been Apple's primary operating system since 1984. Unlike its predecessor, Mac OS X is a UNIX operating system built on technology that had been developed at NeXT through the second half of the 1980s and up until Apple purchased the company in early 1997.
The operating system was first released in 1999 as Mac OS X Server 1.0, with a desktop-oriented version (Mac OS X v10.0) following in March 2001. Since then, five more distinct "end-user" and "server" editions of Mac OS X have been released, the most recent being Mac OS X v10.5, which was first made available in October 2007. Releases of Mac OS X are named after big cats; Mac OS X v10.5 is also called "Leopard". The next version, named "Snow Leopard", will be released in September 2009.
The server edition, Mac OS X Server, is architecturally identical to its desktop counterpart but usually runs on Apple's line of Macintosh server hardware. Mac OS X Server includes work group management and administration software tools that provide simplified access to key network services, including a mail transfer agent, a Samba server, an LDAP server, a domain name server, and others.

Plan 9
Ken Thompson, Dennis Ritchie and Douglas McIlroy at Bell Labs designed and developed the C programming language to build the operating system Unix. Programmers at Bell Labs went on to develop Plan 9 and Inferno, which were engineered for modern distributed environments. Plan 9 was designed from the start to be a networked operating system, and had graphics built-in, unlike Unix, which added these features to the design later. Plan 9 has yet to become as popular as Unix derivatives, but it has an expanding community of developers. It is currently released under the Lucent Public License. Inferno was sold to Vita Nuova Holdings and has been released under a GPL/MIT license.

Real-time operating systems
Main article: real-time operating system
A real-time operating system (RTOS) is a multitasking operating system intended for applications with fixed deadlines (real-time computing). Such applications include some small embedded systems, automobile engine controllers, industrial robots, spacecraft, industrial control, and some large-scale computing systems.
An early example of a large-scale real-time operating system was Transaction Processing Facility developed by American Airlines and IBM for the Sabre Airline Reservations System.

Embedded systems
Main article: list of operating systems#Microcontroller, Real-time
Embedded systems use a variety of dedicated operating systems. In some cases, the "operating system" software is directly linked to the application to produce a monolithic special-purpose program. In the simplest embedded systems, there is no distinction between the OS and the application.
Embedded systems that have fixed deadlines use a real-time operating system such as VxWorks, eCos, QNX, MontaVista Linux and RTLinux.
Some embedded systems use operating systems such as Symbian OS, Palm OS, Windows CE, BSD, and Linux, although such operating systems do not support real-time computing.
Windows CE offers APIs similar to those of desktop Windows but shares none of desktop Windows' codebase[citation needed].

Hobby development
Operating system development, or OSDev for short, has a large hobbyist following, and some operating systems, such as Linux, began as hobby projects. The design and implementation of an operating system requires skill and determination, and the term can cover anything from a basic "Hello World" boot loader to a fully featured kernel. One classic example is the Minix operating system, designed by A. S. Tanenbaum as a teaching tool but heavily used by hobbyists before Linux eclipsed it in popularity.

Other
Older operating systems which are still used in niche markets include OS/2 from IBM and Microsoft; Mac OS, the non-Unix precursor to Apple's Mac OS X; BeOS; and XTS-300. Some, most notably AmigaOS 4 and RISC OS, continue to be developed as minority platforms for enthusiast communities and specialist applications. OpenVMS, formerly from DEC, is still under active development by Hewlett-Packard. A number of operating systems, such as Apple's DOS (Disk Operating System) 3.2 and 3.3 for the Apple II, ProDOS, UCSD, and CP/M, were available for various 8- and 16-bit environments.
Research and development of new operating systems continues. GNU Hurd is designed to be backwards compatible with Unix, but with enhanced functionality and a microkernel architecture. Singularity is a project at Microsoft Research to develop an operating system with better memory protection, based on the .NET managed code model. Systems development follows the same model used by other software development, involving maintainers, version control "trees", forks, "patches", and specifications. After the AT&T-Berkeley lawsuit, new unencumbered systems were based on 4.4BSD, which forked into the FreeBSD and NetBSD efforts to replace code removed after the Unix wars. Recent forks include DragonFly BSD and Darwin, both derived from BSD Unix.

Diversity of operating systems and portability
Application software is generally written for use on a specific operating system, and sometimes even for specific hardware. When porting the application to run on another OS, the functionality required by that application may be implemented differently by that OS (the names of functions, the meaning of arguments, etc.), requiring the application to be adapted.
The cost of supporting operating system diversity can be avoided by instead writing applications against software platforms such as Java or Qt, or for web browsers. These abstractions have already borne the cost of adaptation to specific operating systems and their system libraries.
Another approach is for operating system vendors to adopt standards. For example, POSIX and OS abstraction layers provide commonalities that reduce porting costs.
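As a small illustration of such an abstraction layer, Python's pathlib builds the same logical path using each system's conventions, so an application never hard-codes a path separator:

```python
# The application works with one logical path; the abstraction layer
# renders it per the target system's convention.
from pathlib import PurePosixPath, PureWindowsPath

print(str(PurePosixPath("home", "user", "file.txt")))    # home/user/file.txt
print(str(PureWindowsPath("home", "user", "file.txt")))  # home\user\file.txt
```

POSIX plays the analogous role one level down, standardizing the system calls and utilities themselves so that the abstraction layers have a common target.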

Microsoft Flight Simulator

Microsoft Flight Simulator (sometimes abbreviated to MSFS or FS) is a flight simulator program for Microsoft Windows, marketed and often seen as a video game.
One of the longest-running, best-known and most comprehensive home flight simulator series, Microsoft Flight Simulator was an early product in the Microsoft portfolio – different from its other software which was largely business-oriented – and at 25 years is its longest-running franchise, predating Windows by three years. It has been reported that Microsoft Flight Simulator may be the longest running PC game series of all time. In January 2009 it was reported that Microsoft closed down the ACES Game Studio, the design team responsible for the series.
Bruce Artwick developed the Flight Simulator program beginning in 1977, and his company, subLOGIC, sold it for various personal computers. In 1982 Artwick's company licensed to Microsoft a version of Flight Simulator for the IBM PC, which was marketed as Microsoft Flight Simulator 1.00. Former Microsoft CEO Bill Gates was fascinated with Antoine de Saint-Exupéry's book Night Flight, which described the sensations of flying a small aircraft in great detail.

History
Main article: History of Microsoft Flight Simulator
Microsoft Flight Simulator began life as a set of articles on computer graphics written by Bruce Artwick in 1976 about a 3D computer graphics program. When the magazine editor said that subscribers wanted to buy the program, Bruce Artwick incorporated a company called subLOGIC Corporation in 1977 and began selling flight simulators for 8080 computers such as the Altair 8800 and IMSAI 8080. In 1979 subLOGIC released FS1 Flight Simulator for the Apple II. In 1980 subLOGIC released a version for the Tandy TRS-80, and in 1982 they licensed an IBM PC version with CGA graphics to Microsoft, which was released as Microsoft Flight Simulator 1.00. In the early days of less-than-100% IBM PC compatibles, Flight Simulator was used as an unofficial test of the degree of compatibility of a new PC clone model, along with Lotus 1-2-3. subLOGIC continued to develop the product for other platforms, and their improved Flight Simulator II was ported to the Apple II in 1983, to the Commodore 64, MSX and Atari 800 in 1984, and to the Commodore Amiga and Atari ST in 1986. Meanwhile, Bruce Artwick left subLOGIC to found the Bruce Artwick Organisation to work on subsequent Microsoft releases, beginning with Microsoft Flight Simulator 3.0 in 1988. Microsoft Flight Simulator reached commercial maturity with version 3.1, and then went on to encompass the use of 3D graphics and graphics hardware acceleration.
Microsoft has consistently produced newer versions of the simulation, adding features such as new aircraft types and augmented scenery. The 2000 and 2002 versions were available in a standard edition and a Professional Edition, which included more aircraft, tools, and more extensive scenery than the regular version. The 2004 (version 9) release marked one hundred years of powered flight, and had only one edition. Flight Simulator X, released in 2006, returned to dual editions with a "Standard Edition" and a "Deluxe Edition".
The most recent versions of this simulation, Microsoft Flight Simulator 2004 and Microsoft Flight Simulator X, cater to pilots, would-be pilots and people who once dreamed of being pilots alike. Microsoft Flight Simulator is less a game than an immersive virtual environment; it can be frustrating, complex and difficult for new users due to its realism, but rewarding for the skilled flightsimmer at the same time. The flying area encompasses the whole world, to varying levels of detail, including over 24,000 airports. Individually detailed scenery can be found representing major landmarks and an ever-growing number of towns and cities. Landscape details are often patchy away from population centres and particularly outside the USA, although a variety of websites offer scenery add-ons (both free and commercial) to remedy this.
The three latest versions incorporate sophisticated weather simulation, along with the ability to download real-world weather data (beginning with Flight Simulator 2000). Also included are a varied air traffic environment with interactive Air Traffic Control (although the MSFS series was not the first to implement this), player-flyable aircraft ranging from the historical Douglas DC-3 to the modern Boeing 777, interactive lessons and challenges, and aircraft checklists. In addition, the two latest versions of Microsoft Flight Simulator have a "kiosk mode", which allows the application to be run in kiosks. It is the wide availability of upgrades and add-ons, both free and commercial, which gives the simulator its flexibility and scope.

Closure of the ACES Game Studio
On January 22, 2009, it was reported that the development team behind the franchise was being heavily affected by Microsoft's ongoing job cuts, with indications that the entire Microsoft Flight Simulator team was laid off. Microsoft confirmed the closure of the ACES studio on January 26, 2009 in a post on the official FSInsider Web site. The article, "About the Aces Team," states in part:
This difficult decision was made to align Microsoft’s resources with our strategic priorities. Microsoft Flight Simulator X will remain available at retail stores and web retailers, the Flight Sim community will continue to learn from and encourage one another, and we remain committed to the Flight Simulator franchise for the long term.
According to former ACES employee Phil Taylor, the shutdown was not due to unfavorable financial results of FSX, but due to management issues and delays in project delivery combined with increased demands in headcount, at a time that Microsoft was attempting to lower costs. It has been speculated in the mainstream and gaming media that future releases on the franchise would come as part of an Internet game or on the Xbox 360.
There is an ongoing petition for Microsoft to preserve the integrity of the Flight Simulator franchise and ensure the future of Microsoft Flight Simulator development, with more than 7,808 signatures as of August 4, 2009.
Version history
Main article: History of Microsoft Flight Simulator
1982 – Flight Simulator 1.0
1983 – Flight Simulator 2.0
1988 – Flight Simulator 3.0
1989 – Flight Simulator 4.0
1993 – Flight Simulator 5.0
1995 – Flight Simulator 5.1
1996 – Flight Simulator 95
1997 – Flight Simulator 98
1999 – Flight Simulator 2000
2001 – Flight Simulator 2002
2003 – Flight Simulator 2004: A Century of Flight
2006 – Flight Simulator X

Flight Simulator X
Main article: Microsoft Flight Simulator X
Flight Simulator X is the most recent version of Microsoft Flight Simulator. It includes a graphics engine upgrade as well as compatibility with DirectX 10 and Windows Vista technologies. It was released on October 17, 2006 in North America. There are two versions of the game, both on two DVDs. The Deluxe edition contains the new Garmin G1000 integrated flight instrument system in three cockpits, additional aircraft in the fleet, Tower Control capability (multiplayer only), more missions, more high-detail cities and airports, and a Software Development Kit (SDK) for development. The main improvements are graphical; for instance, it is the first version of the simulator to feature light bloom.
Microsoft has also released a Flight Simulator X Demo, which contains limited aircraft and area of flight. It is available for Windows XP SP2 and Windows Vista.

Add-ons and customization
See also: Category:Microsoft Flight Simulator add-ons
The long history and consistent popularity of Flight Simulator has encouraged a very large body of add-on packages to be developed as both commercial and volunteer ventures. A formal software development kit and other tools for the simulator exist to further facilitate third-party efforts, and some third parties have also learned to "tweak" the simulator in various ways by trial and error.

Aircraft

A PMDG Beech 1900D of "American Flight Airways" in AFA Express colors.
Individual aspects of Flight Simulator aircraft that can be edited include cockpit layout, cockpit image, aircraft model, aircraft model textures, aircraft flight characteristics, scenery models, scenery layouts, and scenery textures, often with simple-to-use programs or only a text editor such as Notepad. Dedicated flightsimmers have taken advantage of Flight Simulator's vast add-on capabilities, having successfully linked Flight Simulator to homebuilt hardware, some of which approaches the complexity of commercial full-motion flight simulators.
The game's aircraft are made up of five parts:
The model, which is a 3D CAD-style model of the aircraft's exterior and virtual cockpit, if applicable.
The textures, bitmap images which the game layers onto the model. These can be easily edited (known as repainting), so that a model can adopt any paint scheme imaginable, fictional or real.
The sounds, literally, what the aircraft sounds like. This is determined by defining which WAV files the aircraft uses as its sound set.
The panel, a representation of the aircraft's cockpit. This includes one or more bitmap images of the panel, instrument gauge files, and sometimes its own sounds.
The FDE, or Flight Dynamics Engine. This consists of the airfile, a *.air file, which contains hundreds of parameters which define the aircraft's flight characteristics, and the aircraft.cfg, which contains more, easier-to-edit parameters.
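Since aircraft.cfg follows an INI-style layout, its easier-to-edit parameters can be read with any standard INI parser. The section and key names below are hypothetical illustrations, not the simulator's actual schema.

```python
# Parsing an aircraft.cfg-style file with Python's standard INI parser.
# The sample sections and keys are invented for illustration.
import configparser

sample = """
[General]
atc_model = B737

[WEIGHT_AND_BALANCE]
max_gross_weight = 174200
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)

print(cfg["General"]["atc_model"])                    # B737
print(cfg["WEIGHT_AND_BALANCE"]["max_gross_weight"])  # 174200
```

This plain-text layout is what makes the FDE's secondary parameters "easier to edit" than the binary *.air file: any text editor, or a few lines of script, can change them.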

AI Traffic
A growing category of add-on for the series is AI (artificial intelligence) traffic: the simulation of other vehicles in the FS landscape. This traffic plays a real role in the simulator, as it is possible to crash into traffic (this can be disabled), thus ending the session, and to interact with the traffic via the radio and ATC. This is possible even with third-party traffic. Microsoft introduced AI traffic in MSFS 2002 with several airliners and private aircraft, and it has since been supplemented with many files created by third-party developers. Typically, third-party aircraft models have multiple levels of detail, which allows the AI traffic to be easier on frame rates while still being detailed during close looks. There are several prominent freeware developers: Project AI is a respected creator of civilian airliner and air cargo traffic, along with the very popular World of AI. The most prominent developer of military traffic is Military AI Works (MAIW), which has released many packages and new AI models covering many countries of the world. There is a small niche market in the form of AI boat traffic as well.

Scenery
Scenery add-ons usually involve replacements for existing airports with enhanced and more accurate detail, or large expanses of highly detailed ground scenery for specific regions of the world. Some types of scenery add-ons replace or add structures to the simulator. Both payware and freeware scenery add-ons are very widely available. Airport enhancements, for example, range from simple freeware add-ons that update runways or taxiways to very elaborate payware packages that reproduce every lamp, pavement marking, and structure at an airport with near-total accuracy, including animated effects such as baggage cars or marshalling agents. Geographic scenery enhancements may use detailed satellite photos and 3-D structures to closely reproduce real-world regions, particularly those including large cities, landmarks, or spectacular natural wonders.

Flight networks
Virtual flight networks such as IVAO and VATSIM use special, small add-on modules for Flight Simulator to enable connection to their proprietary networks in multiplayer mode, and to allow for voice and text communication with other virtual pilots and controllers over the network.

Miscellaneous
Some utilities, such as FSUIPC, merely provide useful tweaks for the simulator to overcome design limitations or bugs, or to allow more extensive interfacing with other third-party add-ons. Sometimes certain add-ons require other utility add-ons in order to work correctly with the simulator.
Other add-ons provide navigation tools, passenger simulation, cameras that can view aircraft or scenery from any angle, more realistic instrument panels and gauges, and so on.
Some software add-ons provide compatibility with specific hardware, such as game controllers and optical motion sensors.

Availability
A number of websites are dedicated to providing users with add-on files (such as airplanes from real airlines, airport utility cars, real buildings located in specific cities, textures, and city files). The wide availability over the Internet of freeware add-on files for the simulation has encouraged the development of a large and diverse virtual community linked by design group/enthusiast message boards, online multiplayer flying, and 'virtual airlines'. The Internet has also facilitated the distribution of payware add-ons for the simulator, since downloadable files reduce distribution costs.
Many add-ons are payware: scenery enhancements, aircraft, sound packages, utilities, and many other kinds of programs are sold under this payment method. Payware add-ons tend to have larger feature sets than their freeware counterparts; extensive features are not, however, restricted to payware packages, and a select few freeware packages are renowned for offering the same functionality and professional quality at no cost.

Community involvement

FS2004 in the UK Lake District with add-on VFR (Visual Flight Rules) photographic scenery and terrain components.
A large community exists for the Microsoft Flight Simulator franchise, partly stemming from the open nature of the simulator structure which allows for numerous modifications to be made. There are also many virtual airlines, where pilots fly their assignments as pilots do in real airlines, as well as worldwide networks for the simulation of air traffic and air traffic control, such as IVAO and VATSIM.

Awards
The success of the Microsoft Flight Simulator series has resulted in Guinness World Records awarding the series seven world records in the Guinness World Records: Gamer's Edition 2008. These records include "Longest Running Flight Sim Series", "Most Successful Flight Simulator Series", and "Most Expensive Home Flight Simulator Cockpit", built by Australian trucking tycoon Matthew Sheil at a cost of over US$242,000.