Operating System
Computer Architecture Support to OS:-
A computer system can be organized in several different ways, which we can categorize roughly according to the number of general-purpose processors used.
1)Single-Processor System:-
Until recently, most computer systems used a single processor. On a single processor system, there is one main CPU capable of executing a general-purpose instruction set, including instructions from user processes. Almost all single-processor systems have other special-purpose processors as well. They may come in the form of device-specific processors, such as disk, keyboard, and graphics controllers; or, on mainframes, they may come in the form of more general-purpose processors, such as I/O processors that move data rapidly among the components of the system.
2)Multiprocessor System:-
Within the past several years, multiprocessor systems (also known as parallel systems or multicore systems) have begun to dominate the landscape of computing. Such systems have two or more processors in close communication, sharing the computer bus and sometimes the clock, memory, and peripheral devices. Multiprocessor systems first appeared prominently in servers and have since migrated to desktop and laptop systems. Recently, multiple processors have appeared on mobile devices such as smartphones and tablet computers.
Multiprocessor systems have three main advantages:-
1)Increased Throughput
2)Economy of scale
3)Increased Reliability
3)Clustered System:-
Clustering is usually used to provide high-availability service— that is, the service will continue even if one or more systems in the cluster fail. Generally, we obtain high availability by adding a level of redundancy in the system. A layer of cluster software runs on the cluster nodes. Each node can monitor one or more of the others (over the LAN). If the monitored machine fails, the monitoring machine can take ownership of its storage and restart the applications that were running on the failed machine. The users and clients of the applications see only a brief interruption of service.
Clustering can be structured asymmetrically or symmetrically. In asymmetric clustering, one machine is in hot-standby mode while the other is running the applications. The hot-standby host machine does nothing but monitor the active server. If that server fails, the hot-standby host becomes the active server. In symmetric clustering, two or more hosts are running applications and monitoring each other. This structure is obviously more efficient, as it uses all of the available hardware. However, it does require that more than one application be available to run.
Process Virtualization:-
Process API:-The process API consists of the calls a program can make related to processes. Typically, this includes creation, destruction, and other useful calls.
a)fork() system call:-The fork() system call is used to create a new process [C63]. However, be forewarned: it is certainly the strangest routine you will ever call. The newly created process (the child) is an almost exact copy of the calling process (the parent): it runs the same code, but fork() returns 0 in the child and the child's PID in the parent. The best way to understand it is to examine the code, or better yet, type it in and run it yourself!
b)wait() system call:-So far, we haven’t done much: just created a child that prints out a message and exits. Sometimes, as it turns out, it is quite useful for a parent to wait for a child process to finish what it has been doing. This task is accomplished with the wait() system call (or its more complete sibling waitpid()).
c)exec() system call:- A final and important piece of the process creation API is the exec() system call. This system call is useful when you want to run a program that is different from the calling program.
Example:-
#include <stdio.h>
#include <stdlib.h>
CPU Scheduling:-
CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system can make the computer more productive. Which process runs next is determined by a scheduling algorithm. CPU scheduling allows one process to use the CPU while another is delayed (for example, waiting for an unavailable resource such as I/O), thus making full use of the CPU. The purpose of CPU scheduling is to make the system more efficient, faster, and fairer. There are many different CPU-scheduling algorithms.
a)First-Come, First-Served Scheduling
b)Shortest-Job-First Scheduling
c)Priority Scheduling
d)Round-Robin Scheduling
e)Multilevel Queue Scheduling
f)Multilevel Feedback Queue Scheduling
Multi-Level Feedback:-
Normally, when the multilevel queue scheduling algorithm is used, processes are permanently assigned to a queue when they enter the system; for example, there may be separate queues for foreground and background processes. The multilevel feedback queue scheduling algorithm, in contrast, allows a process to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts: if a process uses too much CPU time, it is moved to a lower-priority queue.
Lottery Scheduling Code:-
Lottery scheduling is a novel technique proposed for sharing a resource in a probabilistically fair manner. Lottery “tickets” are distributed to all processes sharing a resource in such a manner that a process gets as many tickets as its fair share of the resource. For example, a process would be given five tickets out of a total of 100 tickets if its fair share of the resource is 5 percent.
When the resource is to be allocated, a lottery is conducted among the tickets held by processes that actively seek the resource. The process holding the winning ticket is then allocated the resource. The actual share of the resource allocated to a process depends on contention for the resource. Lottery scheduling can be used for fair-share CPU scheduling as follows: tickets can be issued to applications (or users) on the basis of their fair share of CPU time, and an application can share its tickets among its processes in any manner it desires.
To allocate a CPU time slice, the scheduler holds a lottery in which only tickets of ready processes participate. When the time slice is a few milliseconds, this scheduling method provides fairness even over fractions of a second if all groups of processes are active.
Multi-Processor Scheduling:-
A multiprocessor is a system that has more than one processor but shares the same memory, bus, and input/output devices. Multiprocessor scheduling involves multiple CPUs, which makes load sharing possible. Load sharing is the balancing of load across the processors. It is more complex than single-processor scheduling.
- Symmetric Multiprocessing: In symmetric multiprocessing, all processors are self-scheduling. The scheduler for each processor checks the ready queue and selects a process to execute. Each processor runs a copy of the operating system, and the processors communicate with one another as needed. If one of the processors goes down, the rest of the system keeps working.
- Asymmetric Multiprocessing: In asymmetric multiprocessing, all I/O operations and scheduling decisions are performed by a single processor called the master server, and the remaining processors run only user code. This method reduces the need for data sharing.