Friday, February 26, 2010

SUPERCOMPUTERS

SUPERCOMPUTERS:
A supercomputer is the fastest type of computer. Supercomputers are very expensive and are employed for specialized applications that require immense amounts of mathematical calculation. The chief difference between a supercomputer and a mainframe is that a supercomputer channels all its power into executing a few programs as fast as possible, whereas a mainframe uses its power to execute many programs concurrently.

Some Common Uses of Supercomputers :

Supercomputers are used for highly calculation-intensive tasks such as problems involving quantum mechanical physics, weather forecasting, climate research, molecular modeling (computing the structures and properties of chemical compounds, biological macromolecules, polymers, and crystals), physical simulations (such as simulation of airplanes in wind tunnels, simulation of the detonation of nuclear weapons, and research into nuclear fusion), cryptanalysis, and many others. Some supercomputers have also been designed for very specific functions such as cracking codes and playing chess; Deep Blue is a famous chess-playing supercomputer. Major universities, military agencies, and scientific research laboratories rely heavily on supercomputers.

Hardware and Software Design:

Supercomputers using custom CPUs traditionally gained their speed over conventional computers through innovative designs that allow them to perform many tasks in parallel, as well as painstaking detail engineering. They tend to be specialized for certain types of computation, usually numerical calculations, and perform poorly at more general computing tasks. Their memory hierarchy is very carefully designed to ensure the processor is kept fed with data and instructions at all times; in fact, much of the performance difference between slower computers and supercomputers is due to the memory hierarchy. Their I/O systems tend to be designed to support high bandwidth, with latency less of an issue, because supercomputers are not used for transaction processing.
Supercomputer designs devote great effort to eliminating software serialization and to using hardware to address the remaining bottlenecks.
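
To make the idea of performing many numerical tasks in parallel more concrete, here is a minimal sketch in C using OpenMP, a common shared-memory parallel API. It is purely illustrative and not how any particular supercomputer's software is written; the problem size, file name, and compiler flag in the comments are assumptions.

/* A minimal sketch of shared-memory parallelism in C with OpenMP.
   Illustrative only; compile with something like: gcc -fopenmp -O2 axpy.c
   (the file name and problem size here are arbitrary assumptions). */
#include <stdio.h>
#include <stdlib.h>

#define N 10000000   /* ten million elements, chosen arbitrarily */

int main(void)
{
    double *x = malloc(N * sizeof(double));
    double *y = malloc(N * sizeof(double));
    double a = 2.5;

    for (long i = 0; i < N; i++) {       /* set up the input data */
        x[i] = (double)i;
        y[i] = 1.0;
    }

    /* The loop iterations are independent, so OpenMP can hand each
       thread its own chunk of the index range to compute in parallel. */
    #pragma omp parallel for
    for (long i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];

    printf("y[N-1] = %f\n", y[N - 1]);
    free(x);
    free(y);
    return 0;
}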

Supercomputer challenges:
A supercomputer generates large amounts of heat and therefore must be cooled with complex cooling systems to ensure that no part of the computer fails. Many of these cooling systems take advantage of liquefied gases, which are extremely cold.
Another issue is the speed at which information can be transferred or written to a storage device, as the speed of data transfer limits the supercomputer's performance. Information cannot move faster than the speed of light between two parts of a supercomputer; light covers only about 30 centimetres per nanosecond, so a signal crossing a few metres of cabinet already costs several nanoseconds, which is many clock cycles for a processor running at gigahertz rates.
Supercomputers consume and produce massive amounts of data in a very short period of time. Much work on external storage bandwidth is needed to ensure that this information can be transferred quickly and stored/retrieved correctly.


Operating Systems and Programming:
Most supercomputers run a Linux or Unix operating system, as these operating systems are extremely flexible, stable, and efficient. Supercomputers typically have multiple processors and a variety of other technological tricks to keep them running smoothly.
Until the early-to-mid-1980s, supercomputers usually sacrificed instruction set compatibility and code portability for performance (processing and memory access speed). For the most part, supercomputers had vastly different operating systems; the Cray-1 alone had at least six different proprietary OSs largely unknown to the general computing community. Similarly, different and incompatible vectorizing and parallelizing compilers for Fortran existed.
The base language of supercomputer code is generally Fortran or C, using special libraries to share data between nodes. Software tools for distributed processing include standard APIs such as MPI (the Message Passing Interface), as well as open-source software solutions that make it possible to build a supercomputer from a collection of ordinary workstations or servers.
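
As an illustration of how such libraries share data between nodes, here is a minimal sketch in C using MPI, one of the standard APIs mentioned above. It simply sends one value from one process to another; real supercomputer codes exchange far larger arrays, and the launch command in the comment is just an example.

/* Minimal MPI sketch in C: process (rank) 0 sends one value to rank 1.
   Illustrative only; build with an MPI compiler wrapper such as mpicc
   and launch with something like: mpirun -np 2 ./a.out */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
    int rank;
    double value = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */

    if (rank == 0) {
        value = 3.14159;                    /* data produced on node 0 */
        MPI_Send(&value, 1, MPI_DOUBLE, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_DOUBLE, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);
        printf("rank 1 received %f from rank 0\n", value);
    }

    MPI_Finalize();
    return 0;
}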


Processing Speeds:
Supercomputer computational power is rated in FLOPS (floating-point operations per second). The first commercially available supercomputers reached speeds of 10 to 100 million FLOPS. The next generation of supercomputers is predicted to break the petaflop level, which would represent computing power more than 1,000 times that of a teraflop machine. A relatively old supercomputer such as the Cray C90 (built in the mid to late 1990s) has a processing speed of only 8 gigaflops, yet it can solve in 0.002 seconds a problem that takes a personal computer a few hours. This gives a sense of how rapidly supercomputer processing speeds have developed.
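
The unit arithmetic behind these comparisons is straightforward; the throwaway C snippet below (written for this article, not taken from any benchmark suite) just prints the ratios.

/* Quick check of the FLOPS ratios quoted above (illustrative only). */
#include <stdio.h>

int main(void)
{
    double gigaflops = 1e9;    /* 10^9  floating-point operations per second */
    double teraflops = 1e12;   /* 10^12 */
    double petaflops = 1e15;   /* 10^15 */

    /* A petaflop machine is 1,000 times faster than a teraflop machine ... */
    printf("petaflop / teraflop = %.0f\n", petaflops / teraflops);

    /* ... and 125,000 times faster than an 8-gigaflops Cray C90. */
    printf("petaflop / 8 gigaflops = %.0f\n", petaflops / (8.0 * gigaflops));
    return 0;
}
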
The TOP500 list is dedicated to providing information about the current 500 sites with the fastest supercomputers. Both the list and the content at the site are updated regularly, providing those interested with a wealth of information about developments in supercomputing technology.

Supercomputer Architecture:
Supercomputer design varies from model to model. Generally, there are vector computers and parallel computers. Vector computers use a very fast data "pipeline" to move data from components and memory in the computer to a central processor. Parallel computers use multiple processors, each with its own memory bank, to split up data-intensive tasks.
A vector computer solves a series of problems one by one in consecutive order, whereas a parallel computer, being equipped with multiple processors, works on the problems in parallel. Hence, the parallel computer is able to solve the problems much more quickly than a vector computer.
Other major differences between vector and parallel processors include how data is handled and how each machine allocates memory. A vector machine is usually a single super-fast processor with all the computer's memory allocated to its operation. A parallel machine has multiple processors, each with its own memory.
Vector machines are easier to program, while parallel machines, which must pass data between multiple processors, can have trouble with the communication of data among them (see the sketch below).
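
The contrast can be sketched in a few lines of C. The code below is only a cartoon of the two styles under simplified assumptions (a toy array, a made-up processor count): the "vector" version streams one loop through a single processor's pipeline, while the "parallel" version divides the index range into chunks, one per processor, each of which would work out of its own memory on a real machine.

/* Cartoon of vector-style versus parallel-style work division in C.
   Both functions compute the same array update; only the way the
   work is organized differs. Illustrative only. */
#include <stdio.h>

#define N 16   /* toy problem size */

/* Vector style: one fast processor streams the whole loop through
   its arithmetic pipeline, element after element. */
void vector_style(double *a, const double *b)
{
    for (int i = 0; i < N; i++)
        a[i] = a[i] + b[i];
}

/* Parallel style: the index range is split into chunks, and each
   (hypothetical) processor p handles one chunk from its own memory. */
void parallel_style(double *a, const double *b, int nprocs)
{
    int chunk = N / nprocs;
    for (int p = 0; p < nprocs; p++) {           /* one pass per processor */
        int start = p * chunk;
        int end = (p == nprocs - 1) ? N : start + chunk;
        for (int i = start; i < end; i++)        /* that processor's share */
            a[i] = a[i] + b[i];
    }
}

int main(void)
{
    double a[N], b[N];
    for (int i = 0; i < N; i++) { a[i] = i; b[i] = 1.0; }

    vector_style(a, b);           /* a[i] is now i + 1 */
    parallel_style(a, b, 4);      /* a[i] is now i + 2 */
    printf("a[0] = %.1f, a[%d] = %.1f\n", a[0], N - 1, a[N - 1]);
    return 0;
}
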
Recently, parallel vector computers have been developed to take advantage of both designs.
Manufacturers of Supercomputers:
There are many manufacturers of good supercomputers, and Cray is one of them. Cray provides an informative website with product descriptions, photos, company information, and an index of current developments.
IBM produces supercomputers with the most cutting-edge technology. Its "Blue Gene" supercomputer is expected to run at 200 teraflops, 15 times faster than IBM's current supercomputers. IBM's "Blue Sky", which is called a self-aware supercomputer, will be used to work on colossal computing problems such as weather prediction. Additionally, this supercomputer can repair itself, requiring no human intervention.
Intel has developed a line of supercomputers known as Intel TFLOPS, which use thousands of Pentium Pro processors in a parallel configuration to meet the supercomputing demands of its customers.
R & D on Supercomputers:
IBM is developing the Cyclops64 architecture, intended to create a "supercomputer on a chip".
IBM's 20 PFLOPS supercomputer named "Sequoia" is scheduled to go online in 2011.
Supercomputers are projected to reach 1 exaflops (one quintillion FLOPS) in 2019.
A zettaflops (one sextillion FLOPS) computer required to accomplish full weather modeling might be built around 2030.
