Major Distributed Computing Technologies

Distributed Computing:

Distributed computing is a model in which the components of a software system are spread across multiple computers to improve efficiency and performance. It is also a field of computer science that studies distributed systems: systems whose components are located on different networked computers and cooperate by passing messages to one another.

[Figure: Distributed computing]
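
To make the definition concrete, here is a minimal sketch of two components that cooperate only by passing messages over a TCP socket. It runs both components in one process, with the loopback address standing in for a second machine; in a real distributed system the server and client would sit on different networked computers, and the host and port below are made-up values.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 5050  # loopback stands in for a remote machine
ready = threading.Event()

def server():
    """One component: receives a request over the network and replies."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        ready.set()                      # signal that we are accepting
        conn, _ = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            conn.sendall(f"processed: {request}".encode())

def client():
    """The other component: sends work and waits for the result."""
    ready.wait()                         # don't connect before the server is up
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"task-42")
        print(cli.recv(1024).decode())   # -> processed: task-42

t = threading.Thread(target=server)
t.start()
client()
t.join()
```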

Three Major Distributed Computing Technologies:

The three major distributed computing technologies are described below:

Mainframes:

Mainframes were the first examples of large computing facilities that leveraged multiple processing units. They are powerful, highly reliable computers specialized for large-scale data movement and heavy I/O operations. Mainframes are mostly used by large organizations for bulk data processing such as online transactions, enterprise resource planning, and other big-data operations. They are not considered distributed systems; however, they can handle big data processing thanks to the high computational power provided by their multiple processors. One of the most attractive features of mainframes was their reliability: they were always on and capable of tolerating failures transparently, and components could be replaced without shutting the system down. Batch processing is the most important application of mainframes. Their popularity has declined in recent years.

[Figure: Mainframe]
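
Batch processing means collecting many records and processing them in one unattended run rather than interactively. The toy sketch below is plain Python rather than the JCL/COBOL used on real mainframes, and the transaction data is invented, but the control flow illustrates the same idea: read every record, then emit a report.

```python
# Toy batch job: total up a day's transactions in a single unattended run.
transactions = [
    ("acct-001", 120.00),
    ("acct-002", -35.50),
    ("acct-001", 9.99),
]

balances = {}
for account, amount in transactions:               # process every record...
    balances[account] = balances.get(account, 0.0) + amount

for account, balance in sorted(balances.items()):  # ...then emit the report
    print(f"{account}: {balance:.2f}")
```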

Clusters:

Clusters started as a low-cost alternative to mainframes and supercomputers. As technology advanced, commodity machines became cheap, and they could be connected by high-bandwidth networks and controlled by software tools that manage the message passing between them. Since the 1980s, clusters have become the standard technology for parallel and high-performance computing. Because of their low investment cost, research institutions, companies, and universities now use clusters widely. This technology contributed to the evolution of tools and frameworks for distributed computing such as Condor, PVM, and MPI. One of the attractive features of clusters is that cheap machines can be combined into a system with enough computational power to solve large problems, and clusters are scalable. An example is an Amazon EC2 cluster processing data with Hadoop: it has multiple nodes (machines), organized as a master node and data nodes, and it can be scaled out when the data volume grows. A minimal message-passing sketch appears after the figure below.

[Figure: Cluster]
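
MPI, mentioned above, is the standard message-passing interface used on such clusters. A minimal sketch with the mpi4py bindings (assuming MPI and mpi4py are installed) scatters chunks of a task from the master process to the other processes and gathers the partial results back:

```python
# Run with, e.g.: mpiexec -n 4 python cluster_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()                   # this process's id within the job
size = comm.Get_size()                   # total number of processes

if rank == 0:
    data = list(range(1000))
    chunks = [data[i::size] for i in range(size)]  # one chunk per process
else:
    chunks = None

chunk = comm.scatter(chunks, root=0)     # each process receives its chunk
partial = sum(chunk)                     # local computation on each node
totals = comm.gather(partial, root=0)    # master collects the partial sums

if rank == 0:
    print("total:", sum(totals))         # -> total: 499500
```

The same script runs unchanged whether the processes live on one machine or on many cluster nodes; the MPI runtime handles the networking.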

Grids:

Grids appeared in the early 1990s as an evolution of cluster computing. Grid computing is analogous to the electric power grid: it is an approach for delivering high computational power, storage services, and a variety of other services on demand, so that users consume resources in the same way they use utilities such as power, gas, and water. Grids initially developed as aggregations of geographically dispersed clusters connected over the Internet; the clusters belong to different organizations, which make arrangements to share computational power among themselves. A grid is a dynamic aggregation of heterogeneous computing nodes that can span a nation or the whole world. Several technological developments made the diffusion of computing grids possible:

    • clusters had become common resources;
    • those resources were frequently underutilized;
    • some problems had computational requirements so high that they seemed impossible to satisfy with a single cluster;
    • high-bandwidth, long-distance network connectivity had become available.

[Figure: Grid]
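
Grid middleware (Globus is a well-known example) handles resource discovery and scheduling across the participating organizations. The hypothetical sketch below only illustrates the core scheduling idea: each "site" is a cluster owned by a different organization, and every job is submitted to whichever site currently has the most free capacity. The site names and slot counts are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Site:
    """A cluster contributed to the grid by one organization."""
    name: str
    slots: int        # total compute slots at this site
    running: int = 0  # slots currently in use

    @property
    def free(self) -> int:
        return self.slots - self.running

sites = [Site("university-cluster", 64),
         Site("lab-cluster", 16),
         Site("company-cluster", 128)]

def submit(job: str) -> str:
    """Send the job to the least-loaded site in the grid."""
    site = max(sites, key=lambda s: s.free)
    site.running += 1
    return f"{job} -> {site.name}"

for j in range(5):
    print(submit(f"job-{j}"))   # jobs spread across organizations
```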

These distributed computing technologies have led to the development of cloud computing.

References: Rajkumar Buyya, Christian Vecchiola, and S. Thamarai Selvi, Mastering Cloud Computing.
