Why Are Supercomputers Important?
A supercomputer is a computer with far greater computational capacity than a general-purpose computer such as a personal desktop or laptop.
When reading about supercomputing, you will also hear the term HPC or High-Performance Computing. HPC is a general term that includes all the activities and components associated with supercomputers, including aspects such as software and data storage as well as the bare supercomputer hardware.
Supercomputers were first introduced in the 1960s by Seymour Cray at Control Data Corporation (CDC), and have been used intensively in science and engineering ever since. To keep track of the state of the art, the supercomputing community looks to the TOP500 list, which ranks the world's 500 fastest supercomputers every six months.
It may surprise you to learn that supercomputers are built from the same basic elements that you normally find in your desktop, such as processors, memory, and disk. The difference is largely a matter of scale. The reason is quite simple: developing new hardware costs billions of dollars, and the market for consumer products is vastly larger than that for supercomputing, so the most advanced technology available is usually the same technology you find in general-purpose computers.
Supercomputers tackle difficult problems using parallel computing. Performing computations in parallel means carrying out many calculations simultaneously: it is like having thousands of general-purpose computers all working for you on the same problem at the same time, which is in fact an excellent analogy for how modern supercomputers work. Although supercomputers provide enormous computational capacity, they are also very expensive to develop, purchase, and operate. For example, the typical power consumption of a supercomputer is of the order of several megawatts, where one megawatt is enough to power a small town of around 1,000 people. That is why it is important to use them as efficiently as possible.
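To make the idea of many simultaneous calculations concrete, here is a minimal, illustrative sketch that uses Python's standard multiprocessing module to split a simple calculation, estimating pi from random samples, across a few worker processes, each of which can run on its own CPU-core. A real supercomputer program would use far more workers and dedicated parallel programming libraries, but the principle is the same.

```python
# A minimal sketch of parallel computing using Python's multiprocessing module.
# We estimate pi by splitting many independent samples across several worker
# processes, each of which can run on its own CPU-core.

import random
from multiprocessing import Pool

def count_hits(n_samples):
    """Count random points that land inside the unit quarter-circle."""
    hits = 0
    for _ in range(n_samples):
        x, y = random.random(), random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return hits

if __name__ == "__main__":
    n_workers = 4                  # illustrative: pretend we have 4 CPU-cores
    samples_per_worker = 250_000

    with Pool(n_workers) as pool:
        # Each worker handles its share of the samples at the same time.
        results = pool.map(count_hits, [samples_per_worker] * n_workers)

    total_hits = sum(results)
    total_samples = n_workers * samples_per_worker
    print("Estimate of pi:", 4.0 * total_hits / total_samples)
```

Because each worker's samples are independent, doubling the number of workers (given enough CPU-cores) roughly halves the time to process the same number of samples; this independence is what makes the problem easy to parallelise.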
Some problems are actually too large, too distant or too dangerous to study directly: we cannot experiment on the Earth's weather to study climate change, we cannot travel thousands of light years into space to watch two galaxies collide, and we cannot dive into the centre of the Sun to measure the nuclear reactions that generate its enormous heat. However, a supercomputer can run computer programs that reproduce all of these experiments inside its own memory. This gives the modern scientist a powerful new tool to study the real world in a virtual environment. The process of running a virtual experiment is called computer simulation.
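To give a flavour of what a computer simulation looks like as a program, the toy sketch below (all numbers are invented purely for illustration) steps a very simple physical model, Newton's law of cooling, forward in time and prints the predicted temperature. Real weather or astrophysics simulations follow the same time-stepping pattern, just with vastly more complicated models and vastly more data.

```python
# A toy "virtual experiment": simulating how a hot object cools over time
# using Newton's law of cooling, dT/dt = -k * (T - T_ambient).
# All values here are illustrative, not measured data.

k = 0.1            # assumed cooling constant (per minute)
T_ambient = 20.0   # assumed room temperature in degrees Celsius
T = 90.0           # assumed initial temperature of the object
dt = 1.0           # time step in minutes

for minute in range(31):
    if minute % 5 == 0:
        print(f"t = {minute:2d} min, predicted T = {T:.1f} C")
    # advance the model by one time step (simple Euler integration)
    T += -k * (T - T_ambient) * dt
```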
Computational Science
It’s important to be clear about the difference between computational science and computer science. Computer science is the scientific study of computers: it is through the work of computer scientists that we have the hardware and software required to build and operate today’s supercomputers. In computational science, however, we use these supercomputers to run computer simulations and make predictions about the real world. To make things even more confusing, if a computer scientist said computer simulation they would probably mean simulating a computer, i.e. running a simulation of a computer (not of the real world), which is something you might do to check that your design for a new microprocessor works correctly before going into production.

It is important to understand that large-scale computer simulation has applications in industry, applied engineering, and commerce as well as academia. Modern cars and airplanes are designed and tested almost entirely by computer before they are ever constructed. A new car must pass crash safety tests before going to market: destructive testing of a new car is an expensive process in itself, but not nearly as expensive as having to redesign it should it fail the test. Computer simulation enables us to design new products that are much more likely to work correctly the very first time they are built.
When we talk about a processor, we mean the central processing unit (CPU) in a computer, which can be thought of as the computer’s brain. The CPU carries out the instructions of a computer program, and the terms CPU and processor are generally used interchangeably. The slightly confusing thing is that a modern CPU actually contains several independent brains: it is really a collection of several separate processing units, so we need another term to avoid confusion. We will call each independent processing unit a CPU-core; some people just use the term core.
A modern domestic device (e.g. a laptop, mobile phone or iPad) will usually have a few CPU-cores (perhaps two or four), while a supercomputer has tens or hundreds of thousands of CPU-cores. As mentioned before, a supercomputer gets its power from all these CPU-cores working together at the same time, i.e. working in parallel (by contrast, a single CPU-core carrying out one computation at a time is called serial computing). Interestingly, the same approach is used for computer graphics: the graphics processor (or GPU) in a home games console contains hundreds of cores. Special-purpose processors like GPUs are now being used to increase the power of supercomputers; in this context they are called accelerators.
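If you want to see how many CPU-cores your own machine reports, the Python standard library is enough; note that the number returned counts logical cores, which on many CPUs includes hardware threads as well as physical CPU-cores.

```python
# Report how many CPU-cores the operating system makes visible to programs.
import os

print("CPU-cores visible to the operating system:", os.cpu_count())
```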
Using all of these CPU-cores together means they must be able to talk to each other. In a supercomputer, connecting very large numbers of CPU-cores requires a communications network, called the interconnect in the jargon of the field. A large parallel supercomputer may also be called a Massively Parallel Processor, or MPP.
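The usual way for programs running on different CPU-cores (or different machines) to talk to each other over the interconnect is message passing, most commonly via MPI. The sketch below is a minimal example, assuming the mpi4py package and an MPI installation are available: it simply passes one message from one process to another.

```python
# A minimal sketch of CPU-cores "talking to each other" over the interconnect,
# using mpi4py (a Python wrapper around MPI, the de facto standard for
# message passing on supercomputers).
# Run with two processes, e.g.:  mpirun -n 2 python hello_mpi.py

from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's ID (0, 1, ...)
size = comm.Get_size()   # total number of processes

if rank == 0:
    # process 0 sends a message across the interconnect to process 1
    comm.send("hello from rank 0", dest=1, tag=0)
    print(f"rank 0 of {size}: message sent")
elif rank == 1:
    message = comm.recv(source=0, tag=0)
    print(f"rank 1 of {size}: received '{message}'")
```

On a laptop both processes run on CPU-cores in the same chip; on a supercomputer the same program could have its processes on different nodes, with the messages travelling over the interconnect.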