Distributed computing systems


A distributed computing system is a collection of computers interconnected by a communication network, in which each processor has its own local memory and communication between any two processors takes place through the network.

In computer science, distributed computing is a field of study that focuses on the design and implementation of systems that distribute computation across multiple devices.

How distributed computing systems work

Computers in these systems work together to complete tasks, providing more processing power and storage than a single computer. There are two main types of distributed systems:

1. Client-server architecture: In this type of system, clients request and receive data from a server. The server manages the data and distributes it to the clients (a minimal sketch follows this list).

2. Peer-to-peer (P2P) architecture: In this type of system, each node in the network can act as both a client and a server. The nodes share data and resources with each other without the need for a central server.
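As a rough illustration of the client-server pattern, the sketch below uses Python's standard socket module: one thread plays the server that owns the data, and the main thread plays a client that requests it. The address, port, and message format are illustrative assumptions rather than part of any particular system.

```python
import socket
import threading

HOST, PORT = "127.0.0.1", 5000      # illustrative address and port
ready = threading.Event()           # lets the client wait until the server is listening

def server():
    """The server owns the data and answers one client request."""
    data_store = {"greeting": "hello from the server"}
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()
        ready.set()
        conn, _ = srv.accept()
        with conn:
            key = conn.recv(1024).decode()                   # client sends a key
            conn.sendall(data_store.get(key, "?").encode())  # server replies with the value

def client():
    """The client requests data and prints the server's reply."""
    ready.wait()
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"greeting")
        print(cli.recv(1024).decode())   # -> hello from the server

t = threading.Thread(target=server)
t.start()
client()
t.join()
```

In a P2P system, each node would run both of these roles: acting as a server for the data it holds and as a client when it needs data held elsewhere.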


DISTRIBUTED COMPUTING MODELS

A distributed computing model is one in which computing is done across a network of computers rather than on a single centralized computer. The main models are:

  • Minicomputer model
  • Workstation model
  • Workstation-server model
  • Processor pool model
  • Hybrid model

DISTRIBUTED COMPUTING MODELS EXPLAINED

1. MINICOMPUTER MODEL (example: ARPANET)

It consists of a few minicomputers interconnected by a communication network. Each minicomputer has multiple users simultaneously logged on through interactive terminals, with remote access to the other minicomputers.


The minicomputer model may be used when resource sharing (such as sharing of information databases of different types, with each type of database located on a different machine) with remote users is desired. The minicomputer model is often used for high-performance computing applications such as weather forecasting, scientific simulation, and financial modeling.


2. WORKSTATION MODEL

It consists of several workstations interconnected by a communication network. Each workstation has its own disk and serves as a single-user computer. The workstations are connected through a high-speed LAN so that idle workstations can be used to process the jobs of users logged on to workstations with less computing power.

The workstation model has several advantages. First, each computer can be dedicated to running a single application or a small number of applications. This makes it easier to manage the software and ensure that it runs correctly. Second, the workstation model can be scaled up easily by adding more computers to the network. Third, this model is very fault-tolerant; if one computer goes down, the others can continue to operate without interruption.

There are also some disadvantages to the workstation model. First, it can be expensive to set up and maintain, since each computer requires its own operating system and software. Second, the network connecting the computers can be a bottleneck if there is a lot of traffic.


ISSUES IN IMPLEMENTING THIS MODEL

  • How to find an idle workstation? (a toy selection sketch follows this list)
  • How is a process transferred from one workstation to another so that it can be executed there?
  • What happens to a remote process if the owner logs back onto a workstation that was idle until now and was being used to execute a process from another workstation?
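The first issue, finding an idle workstation, can be pictured with the toy sketch below. It assumes a hypothetical coordinator that already knows each workstation's CPU load and whether its owner is logged in; a real system would gather this information with load daemons or heartbeat messages.

```python
from dataclasses import dataclass

@dataclass
class Workstation:
    name: str
    cpu_load: float      # fraction of CPU in use, 0.0 .. 1.0
    logged_in: bool      # an owner at the console takes priority

def find_idle(workstations, load_threshold=0.1):
    """Pick a workstation with no interactive user and low CPU load."""
    candidates = [w for w in workstations
                  if not w.logged_in and w.cpu_load < load_threshold]
    return min(candidates, key=lambda w: w.cpu_load, default=None)

pool = [Workstation("ws-1", 0.70, True),
        Workstation("ws-2", 0.02, False),
        Workstation("ws-3", 0.05, False)]

target = find_idle(pool)
print(target.name if target else "no idle workstation")   # -> ws-2
```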

3. WORKSTATION-SERVER MODEL

It consists of a few minicomputers and several workstations interconnected by a communication network. Most of the workstations are diskless, while a few are diskful.

A diskful workstation has its own local disk, while a diskless workstation has none. The minicomputers in this model act as servers that provide one or more shared services, most commonly as file servers, print servers, and database servers.

This model is often used in organizations where each user has their own dedicated computer, but there is also a central server that provides shared resources such as files, printers, and email.


Advantages of workstation-server model

  1. It is cheaper to employ a few minicomputers equipped with large, fast disks accessed over the network than a large number of diskful workstations, each with a small, slow disk.
  2. System maintenance is easy, since activities such as software updates, backups, and hardware repair or replacement are done on the server side.
  3. It is flexible, since a user can use any workstation to access resources.
  4. Response time is guaranteed, since workstations are not used for remote execution of tasks.

Disadvantages of the workstation-server model

The workstation server model can also have some disadvantages. One potential issue is that if the central server goes down, all users will lose access to the shared resources. Another concern is that if one user's computer is overloaded with requests from other users, it can slow down the system for everyone.


4. PROCESSOR POOL MODEL

This model consists of a large number of microcomputers and minicomputers attached to the network. The processors are pooled together to be shared among users. Each processor in the pool has its own memory to load and run a system program. Terminals are not directly connected to the processors but attached to the network via special devices.

A run server is used to manage and allocate the processors to users according to demand. For example, if the user's computation job is the compilation of a program having n segments, in which each of the segments can be compiled independently to produce separate relocatable object files, n processors from the pool can be allocated to this job to compile all the n segments in parallel. When the computation is completed, the processors are returned to the pool for use by other users.
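The parallel compilation example above can be sketched as follows, with Python's process pool standing in for the processor pool and the executor playing the role of the run server. The compile_segment function is only a stand-in that renames files, not a real compiler.

```python
from concurrent.futures import ProcessPoolExecutor

def compile_segment(segment: str) -> str:
    """Stand-in for compiling one independent program segment
    into a relocatable object file."""
    return segment.replace(".c", ".o")   # pretend compilation

segments = [f"part{i}.c" for i in range(1, 5)]   # n = 4 independent segments

if __name__ == "__main__":
    # The executor acts like a run server: it hands each segment to a
    # free processor in the pool and collects the results, after which
    # the worker processes are released for other users.
    with ProcessPoolExecutor(max_workers=len(segments)) as pool:
        object_files = list(pool.map(compile_segment, segments))
    print(object_files)   # ['part1.o', 'part2.o', 'part3.o', 'part4.o']
```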


The advantage of this model is that it makes maximum use of the available resources.

The disadvantage is the slow communication between the user's terminal and the processors, which makes the system inconvenient for interactive, graphics-intensive work.


5. THE HYBRID MODEL

It combines the advantages of the workstation-server model with those of the processor pool model.


The processors in the pool are allocated for computations that are too large for individual workstations or that require several computers concurrently. The model guarantees response time for interactive jobs by allowing them to be processed on the local workstations of their users.
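A minimal sketch of this dispatch policy, assuming a hypothetical per-job cost estimate: interactive or small jobs stay on the user's local workstation, while large jobs go to the processor pool. The 60-second cutoff is an arbitrary assumption, not a standard value.

```python
def dispatch(job):
    """Route a job to the local workstation or the processor pool.

    `job` is a dict with illustrative fields: 'interactive' (bool)
    and 'estimated_cpu_seconds' (float).
    """
    if job["interactive"] or job["estimated_cpu_seconds"] < 60:
        return "local workstation"
    return "processor pool"

print(dispatch({"interactive": True,  "estimated_cpu_seconds": 5}))     # local workstation
print(dispatch({"interactive": False, "estimated_cpu_seconds": 3600}))  # processor pool
```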

There are some challenges that come with using a hybrid model, however. One challenge is that it can be difficult to manage and maintain such a complex system. There is also a greater potential for security vulnerabilities in a hybrid system.

WHY DISTRIBUTED COMPUTING SYSTEMS ARE GAINING POPULARITY

INHERENTLY DISTRIBUTED APPLICATIONS

Inherently distributed applications require some processing power to be available at many distributed locations for collecting, preprocessing, and accessing data, resulting in the need for distributed computing systems.

Examples include computerized banking systems, worldwide airline reservation systems, and factory automation systems that control robots and machines along an assembly line.

INFORMATION SHARING

Information generated by one of the users can be easily and efficiently shared by users working at other nodes of the system. Groupware, the use of distributed computing systems by a group of users to work cooperatively, is a major promise for software developers.

RESOURCE SHARING

Resources such as software libraries and databases, as well as hardware resources such as printers and hard disks, can be shared among the nodes of the system.

BETTER PRICE-PERFORMANCE RATIO

Resource sharing makes it possible to employ fewer devices in a network. Expensive devices such as laser printers and high-speed storage devices can be shared among users, reducing the expense of buying one for each user.

SHORTER RESPONSE TIME AND HIGHER THROUGHPUT

Distributed computing systems are more responsive and give higher throughput than stand-alone devices. Complex problems can be solved by means of parallel computation, and multiple tasks can be executed at the same time, one on the local machine and another on a remote node.

HIGHER RELIABILITY

Distributed computing systems have a higher degree of tolerance against errors and system failures. Replication of storage devices and processors provides redundancy in the system.

EXTENSIBILITY AND INCREMENTAL GROWTH

It is possible to gradually extend the power and functionality of the system by adding resources, both hardware and software, as the need arises. This is mostly possible in open distributed systems, since an increased workload may simply need additional processors.

BETTER FLEXIBILITY AND MEETING USER'S NEEDS

In distributed computing systems, computers with less computational power are used for ordinary data-processing jobs, whereas high-performance computers are used for complex mathematical computations. The appropriate computers are allocated to specific tasks.

DISTRIBUTED OPERATING SYSTEMS

An operating system is a program that controls the resources of a computer system and provides its users with an interface or virtual machine that is more convenient to use than the bare machine.

The primary task of the OS is to provide users with a virtual machine that is easier to program than the underlying hardware. The other task is to manage the resources of the system.

Process management is responsible for creating, destroying, and managing processes. This includes keeping track of which processes are running on which nodes, as well as scheduling process execution.

Resource management is responsible for allocating and deallocating resources among processes. This includes things like CPU time, memory, and I/O devices.

Device management is responsible for managing access to devices by processes. This includes ensuring that only authorized processes can access devices, and that access is properly synchronized between processes.

Security is responsible for protecting the system from unauthorized access and malicious activity. This includes things like authentication and authorization, as well as intrusion detection and prevention.

Operating systems in a distributed computing system are of two types:

  • Network operating system
  • Distributed operating system

HOW TO DIFFERENTIATE NETWORK OPERATING SYSTEM FROM DISTRIBUTED OPERATING SYSTEM

1. System Image

The selection of the machine for executing a job is entirely manual in a network operating system, while it is automatic in a distributed operating system. Control over file placement is done manually by users in a network operating system but automatically in a distributed operating system.

Users of a network operating system are aware that different computers are used in the system, while users of a distributed operating system are not. The individual computers are hidden and presented as a single system image.


2. Autonomy

The degree of autonomy of each machine in a distributed computing system that uses a network operating system is considerably higher than that of machines using a distributed operating system. A network operating system allows different machines to run different operating systems, while a distributed operating system uses one operating system across the whole system. The kernel, the program that supports a set of system calls, manages and controls the machine's hardware.


3. Fault Tolerance capability

A distributed operating system has a higher fault tolerance capability than a network operating system. For example, if ten percent of the machines fail under a network operating system, ten percent of the users cannot work at all; if ten percent of the machines fail under a distributed operating system, all users can continue to work, but performance is reduced by about ten percent.
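A back-of-the-envelope check of that example, assuming ten machines, one failure, and work spread evenly across the machines:

```python
machines = 10
failed = 1          # ten percent of the machines fail

# Network operating system: each user is tied to one machine,
# so the users on the failed machines simply cannot work.
users_blocked = failed                                 # 1 user out of 10

# Distributed operating system: the remaining machines share the load,
# so everyone keeps working with proportionally less capacity.
remaining_capacity = (machines - failed) / machines    # 0.9

print(users_blocked, remaining_capacity)               # 1 0.9
```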

Classifying distributed computing systems according to their operating systems, two categories are obtained:

  • NETWORK SYSTEM - a distributed computing system that uses a network operating system.
  • TRUE DISTRIBUTED SYSTEM - a distributed computing system that uses a distributed operating system.


ISSUES IN DESIGNING A DISTRIBUTED OPERATING SYSTEM

1. Transparency

  • Access transparency
  • Location transparency
  • Replication transparency
  • Failure transparency
  • Migration transparency
  • Concurrency transparency
  • Performance transparency
  • Scaling transparency

2. Reliability

  • Fault avoidance
  • Fault tolerance
  • Fault detection and recovery

3. Flexibility

  • Ease of modification
  • Ease of enhancement

4. Performance

5. Scalability

6. Heterogeneity





