Hello folks, I am Sushil, and today I will be continuing with the topic of supercomputers. I have already covered their introduction and the sectors where they are used.

Today I will be covering the hardware and the software side, which gets more interesting as we explore it in depth.

Let’s start.

One way of organising large numbers of processors is grid computing. Grid computing uses a large number of computers in distributed, diverse administrative domains. It is an opportunistic approach which uses resources whenever they are available. An example is BOINC, a volunteer-based, opportunistic grid system. Some BOINC applications have reached multi-petaflop levels by using close to half a million computers connected to the Internet, whenever volunteer resources become available. However, these types of results often do not appear in the TOP500 ratings because they do not run the general-purpose Linpack benchmark.
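To make the opportunistic idea concrete, here is a tiny Python sketch (all names here are made up; this is not the BOINC API): work units are only handed to volunteer machines that currently report themselves as available, which is exactly why throughput depends on who happens to be online.

import random

work_units = list(range(10))                      # tasks waiting to be computed
volunteers = [f"host-{i}" for i in range(5)]      # hypothetical volunteer hosts

def available(host):
    """Stand-in for a real availability check (host idle, owner opted in)."""
    return random.random() < 0.5

assignments = {}
while work_units:
    idle = [h for h in volunteers if available(h)]
    for host in idle:
        if not work_units:
            break
        assignments.setdefault(host, []).append(work_units.pop())

print(assignments)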

Although grid computing has had success in parallel task execution, demanding supercomputer applications such as weather simulations or computational fluid dynamics have remained out of reach, partly due to the barriers in reliable sub-assignment of a large number of tasks as well as the reliable availability of resources at a given time.

In quasi-opportunistic supercomputing a large number of geographically dispersed computers are orchestrated with built-in safeguards. The quasi-opportunistic approach goes beyond volunteer computing on highly distributed systems such as BOINC, or general grid computing on a system such as Globus, by allowing the middleware to provide almost seamless access to many computing clusters so that existing programs in languages such as Fortran or C can be distributed among multiple computing resources.

Quasi-opportunistic supercomputing aims to provide a higher quality of service than opportunistic resource sharing. The quasi-opportunistic approach enables the execution of demanding applications within computer grids by establishing grid-wise resource allocation agreements and fault-tolerant message passing to abstractly shield against the failures of the underlying resources, thus maintaining some opportunism while allowing a higher level of control.
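Here is a minimal Python sketch of what "fault-tolerant message passing" means in spirit: a send is retried against alternative resources so the application never sees an individual node failure. The node names and failure model are hypothetical; real systems rely on middleware like the one described above, not a toy loop like this.

import random

class NodeFailure(Exception):
    pass

def raw_send(message, node):
    """Pretend to send a message; a node may fail unpredictably."""
    if random.random() < 0.3:
        raise NodeFailure(node)
    return f"{message!r} delivered to {node}"

def reliable_send(message, nodes):
    """Try each candidate resource in turn, hiding failures from the caller."""
    for node in nodes:
        try:
            return raw_send(message, node)
        except NodeFailure:
            continue                       # fall through to the next resource
    raise RuntimeError("all resources failed")

print(reliable_send("partial result", ["cluster-a", "cluster-b", "cluster-c"]))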


Supercomputer architectures have of course evolved over time, but some changes that have taken place in the 21st century are worth mentioning. These computers also require the best cooling environment that can be provided, which helps the processors perform at their best.

The K computer is a water-cooled, homogeneous processor, distributed memory system with a cluster architecture. It uses more than 80,000 SPARC64 VIIIfx processors, each with eight cores, for a total of over 700,000 cores—almost twice as many as any other system. It comprises more than 800 cabinets, each with 96 computing nodes (each with 16 GB of memory) and 6 I/O nodes. Although it is more powerful than the next five systems on the TOP500 list combined, at 824.56 MFLOPS/W it has the lowest power-to-performance ratio of any current major supercomputer system. The follow-up system for the K computer, called the PRIMEHPC FX10, uses the same six-dimensional torus interconnect, but still only one processor per node.
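To picture what a torus interconnect buys you, here is a small Python sketch of six-dimensional torus addressing: every node has a coordinate in each dimension, and its neighbours are reached by wrapping +/-1 steps around each axis. The dimension sizes are invented for illustration and are not the K computer's real Tofu topology.

from itertools import product

DIMS = (4, 4, 4, 3, 3, 2)   # hypothetical extent of each of the 6 dimensions

def neighbours(coord):
    """Return the 12 wrap-around neighbours (+/-1 in each of 6 dimensions)."""
    result = []
    for axis, step in product(range(len(DIMS)), (-1, +1)):
        nxt = list(coord)
        nxt[axis] = (nxt[axis] + step) % DIMS[axis]   # torus wrap-around
        result.append(tuple(nxt))
    return result

print(neighbours((0, 0, 0, 0, 0, 0)))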

Unlike the K computer, the Tianhe-1A system uses a hybrid architecture and integrates CPUs and GPUs. It uses more than 14,000 Xeon general-purpose processors and more than 7,000 Nvidia Tesla general-purpose graphics processing units (GPGPUs) on about 3,500 blades. It has 112 computer cabinets and 262 terabytes of distributed memory; 2 petabytes of disk storage is implemented via the Lustre clustered file system. Tianhe-1 uses a proprietary high-speed communication network to connect the processors. The proprietary interconnect network was based on Infiniband QDR, enhanced with Chinese-made FeiTeng-1000 CPUs. The interconnect is twice as fast as Infiniband, but slower than some interconnects on other supercomputers.
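The hybrid CPU+GPU idea can be illustrated with a few lines of Python: offload a matrix multiply to a GPU when one is available and fall back to the CPU otherwise. I am using NumPy and (if present) CuPy purely as stand-ins here; this is not Tianhe-1A's actual software stack.

import numpy as np

try:
    import cupy as cp          # optional GPU backend; needs an NVIDIA GPU
    xp = cp
except ImportError:
    xp = np                    # CPU fallback

def matmul(a, b):
    """Multiply two matrices on whichever backend was selected."""
    a_dev = xp.asarray(a)
    b_dev = xp.asarray(b)
    c_dev = a_dev @ b_dev
    # Move the result back to host memory if it lives on the GPU.
    return cp.asnumpy(c_dev) if xp is not np else c_dev

a = np.random.rand(512, 512)
b = np.random.rand(512, 512)
c = matmul(a, b)
print("backend:", xp.__name__, "result shape:", c.shape)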

The limits of specific approaches continue to be tested, as boundaries are reached through large scale experiments, e.g., in 2011 IBM ended its participation in the Blue Waters petaflops project at the University of Illinois. The Blue Waters architecture was based on the IBM POWER7 processor and intended to have 200,000 cores with a petabyte of “globally addressable memory” and 10 petabytes of disk space. The goal of a sustained petaflop led to design choices that optimized single-core performance, and hence a lower number of cores. The lower number of cores was then expected to help performance on programs that did not scale well to a large number of processors. The large globally addressable memory architecture aimed to solve memory address problems in an efficient manner, for the same type of programs. Blue Waters had been expected to run at sustained speeds of at least one petaflop, and relied on the specific water-cooling approach to manage heat. In the first four years of operation, the National Science Foundation spent about $200 million on the project. IBM released the Power 775 computing node derived from that project’s technology soon thereafter, but effectively abandoned the Blue Waters approach.

Architectural experiments are continuing in a number of directions, e.g. the Cyclops64 system uses a “supercomputer on a chip” approach, in a direction away from the use of massive distributed processors. Each 64-bit Cyclops64 chip contains 80 processors, and the entire system uses a globally addressable memory architecture. The processors are connected with a non-internally blocking crossbar switch and communicate with each other via global interleaved memory. There is no data cache in the architecture, but half of each SRAM bank can be used as a scratchpad memory. Although this type of architecture allows unstructured parallelism in a dynamically non-contiguous memory system, it also produces challenges in the efficient mapping of parallel algorithms to a many-core system.
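Here is a short Python sketch of what globally interleaved memory means: consecutive words are spread round-robin across the SRAM banks, so neighbouring addresses land in different banks and can be served in parallel through the crossbar. The bank count and word size below are illustrative, not Cyclops64's real parameters.

NUM_BANKS = 80        # hypothetical: one bank per on-chip processor
WORD_BYTES = 8        # hypothetical word size

def locate(address):
    """Map a byte address to (bank, offset-within-bank) under interleaving."""
    word = address // WORD_BYTES
    bank = word % NUM_BANKS          # round-robin across banks
    offset = word // NUM_BANKS       # position inside the chosen bank
    return bank, offset

for addr in (0, 8, 16, 640):
    print(addr, "->", locate(addr))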

These are the biggest computers, with the largest processor counts, and the best computing machines on Earth as of now.

Let’s list the top 5 supercomputers currently in operation, along with their prices:

1. Tianhe-2 (MilkyWay-2)

In the supercomputer manufacturing race, China's Tianhe-2 (TH-2, or MilkyWay-2) has dominated, sitting at the top of the list since June 2013 and remaining unbeaten to date. Tianhe-2 is a 33.86-petaflops supercomputer developed by a team of 1,300 scientists and engineers and located at Sun Yat-sen University, Guangzhou, China. This supercomputer has 3,120,000 processor cores.


Country: China
Site: National University of Defense Technology (NUDT)
Manufacturer: National University of Defense Technology (NUDT)
Cores: 3,120,000
Linpack Performance (Rmax): 33,862.7 TFlop/s
Theoretical Peak (Rpeak): 54,902.4 TFlop/s
Power: 17,808.00 kW
Memory: 1,024,000 GB
Operating System: Kylin Linux
Purpose/Usage: For local weather service and the National Offshore Oil Corporation
Cost: 2.4 billion Yuan or 3 billion Hong Kong dollars (390 million US Dollars) 
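Since the spec sheet lists Rmax, Rpeak and power draw, a quick back-of-the-envelope Python calculation shows how two commonly quoted figures fall out of those numbers: the Linpack efficiency (how close the measured run gets to theoretical peak) and the performance per watt.

rmax_tflops = 33_862.7      # Linpack performance (Rmax)
rpeak_tflops = 54_902.4     # theoretical peak (Rpeak)
power_kw = 17_808.0         # reported power draw

efficiency = rmax_tflops / rpeak_tflops                       # fraction of peak achieved
gflops_per_watt = (rmax_tflops * 1000) / (power_kw * 1000)    # TFLOPS->GFLOPS, kW->W

print(f"Linpack efficiency: {efficiency:.1%}")                # roughly 62%
print(f"Performance per watt: {gflops_per_watt:.2f} GFLOPS/W")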

2. Titan

The Titan supercomputer is a 560,640-core machine, actually an upgraded version of the Jaguar supercomputer, built by the American supercomputer manufacturer Cray Inc. and located at Oak Ridge National Laboratory. The initial cost of the unit was approximately $60 million, funded by the U.S. Department of Energy (DoE), which rose to about $97 million with the addition of the storage system.


Country: U.S.
Site: DOE/SC/Oak Ridge National Laboratory
Manufacturer: Cray Inc.
Cores: 560,640
Linpack Performance (Rmax): 17,590.0 TFlop/s
Theoretical Peak (Rpeak): 27,112.5 TFlop/s
Power: 8,209.00 kW
Memory: 710,144 GB
Operating System: Cray Linux Environment
Purpose/Usage: Used to simulate molecular physics, energy, electron-atom activity and interactions, and global atmosphere modeling.
Cost: $100 million

3. Sequoia

Sequoia, a Blue Gene/Q supercomputer manufactured by IBM for the National Nuclear Security Administration (NNSA) at Lawrence Livermore National Laboratory (LLNL), was deployed at the site in June 2012 and took the world #1 supercomputer spot.


Country: U.S.
Site: Lawrence Livermore National Laboratory
Manufacturer: IBM
Cores: 1,572,864
Linpack Performance (Rmax): 17,173.2 TFlop/s
Theoretical Peak (Rpeak): 20,132.7 TFlop/s
Power: 7,890.00 kW
Memory: 1,572,864 GB
Operating System: Linux
Purpose/Usage: Nuclear weapons simulation, energy, astronomy, study of the human genome, and climate change.
Cost: $250 million

4. K computer


Country: Japan
Site: RIKEN Advanced Institute for Computational Science (AICS)
Manufacturer: Fujitsu
Cores: 705,024
Linpack Performance (Rmax): 10,510.0 TFlop/s
Theoretical Peak (Rpeak): 11,280.4 TFlop/s
Power: 12,659.89 kW
Memory: 1,410,048 GB
Operating System: Linux
Purpose/Usage: Climate research, Medical researches and Disaster prevention
Cost: $1.2 billion

5. Mira


Country: U.S.
Site: DOE/SC/Argonne National Laboratory
Manufacturer: IBM
Cores: 786,432
Linpack Performance (Rmax): 8,586.6 TFlop/s
Theoretical Peak (Rpeak): 10,066.3 TFlop/s
Power: 3,945.00 kW
Operating System: Linux
Purpose/Usage: Used for scientific research in the fields of material science, climatology and computational chemistry.
Cost: $50 million

Well, the price figures are as big as the space these computers take up.

I hope you find this as astonishing as I do; computing with these beasts is surprisingly fast.

Share the blog and tell me your views on this topic.
