Over the years, many definitions of the notion of
“supercomputer” have been given. Two of the most well known are “the
fastest existing computer at any point in time” and “a computer that has
a performance level that is typically hundreds of times higher than
that of normal commodity computers.” Both definitions have their
drawbacks. In the first definition the object in question is a moving
target because of the fast rate at which new computers are conceived and
built. With this definition it is therefore hard to know whether a
certain computer is still
the supercomputer or whether a new, even faster one has just emerged.
The second definition is vague because it presupposes that one can
easily determine the performance level of a computer, which is by no
means true. Furthermore, the performance factor that should
discriminate between supercomputers and commodity computers is not
easily established either. Indeed, it is not even straightforward to define
what is meant by the term “commodity computer.”
Should a supercomputer be measured against a PC, used mainly for word
processing, or against a workstation used for technical computations?
Faster Standard Computing
Still, it is obvious that, whatever definition is used, one expects
supercomputers to be significantly faster on any task than the computers
to which one is normally exposed. In that sense the second definition
is the more appropriate one. We therefore adhere mainly to this rather vague
definition, with the addition that supercomputers have a special
architecture that enables them to be faster than the standard computing equipment we use every day.
The architecture, that is, the high-level structure of the machine in terms of its
processors, its memory modules, and the interconnection network between
these elements, largely determines the machine’s performance and, as such, whether
or not it is a supercomputer. Other defining features of the
architecture are the instruction set of the computer and the
accessibility of the components in the architecture from the
programmer’s point of view.
Moore’s Law
It is good to realize that even for commodity computers the speed is
increasing continuously, because the processor speed, according
to Moore’s law, doubles roughly every 18 months. Supercomputers
therefore need to be at least on the same technology curve with respect to
processor speed and, in addition, employ their architectural advantage
to stay ahead of commodity computers. This is also evident from the
clock cycle of both commodity computers and supercomputers: in both
types of systems the clock cycle is in the range of 1–3 nsec, and it is
not likely that future supercomputers will contain
significantly faster processors, because of the enormous additional costs
incurred in their development and fabrication.
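To make the 18-month figure concrete, a doubling time of 1.5 years implies the following growth factor for the processor speed (a simple back-of-the-envelope consequence of the doubling rate quoted above, not part of Moore’s original formulation):

\[
  \frac{\mathrm{speed}(t)}{\mathrm{speed}(0)} \;=\; 2^{\,t/1.5},
\]

so that, for example, a commodity processor becomes roughly 4 times faster over 3 years (2^{3/1.5} = 4) and roughly 10 times faster over 5 years (2^{5/1.5} ≈ 10).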
Coordination of Processors
Nowadays, the architectural advantage of supercomputers is due almost entirely to
parallelism: many processors in a supercomputer are commonly involved in a
single computational task. Ideally, the speedup that is achieved
increases linearly with the number of processors contributing
to such a computational task. Because of the time spent
coordinating the processors, this linear increase in speed is
seldom observed. Nevertheless, parallelism enables us to tackle
computational problems that would simply be unthinkable without it.
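A minimal model makes this explicit. If a task takes time T(1) on one processor, and coordinating p processors costs an additional overhead T_c(p), then a simple (and admittedly idealized) estimate of the achievable speedup is

\[
  S(p) \;=\; \frac{T(1)}{\,T(1)/p \;+\; T_c(p)\,} \;\le\; p,
\]

which attains the ideal value p only when T_c(p) = 0; any growth of T_c with p, for instance through communication and synchronization, pulls S(p) below the linear ideal. This overhead model is an illustrative assumption; more refined treatments, such as Amdahl’s law, also account for the fraction of the task that cannot be parallelized at all.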