                                    Operating System

Computer Architecture Support to OS:-

                         A computer system can be organized in several different ways, which we can categorize roughly according to the number of general-purpose processors used.

1)Single-Processor System:-

                    Until recently, most computer systems used a single processor. On a single processor system, there is one main CPU capable of executing a general-purpose instruction set, including instructions from user processes. Almost all single-processor systems have other special-purpose processors as well. They may come in the form of device-specific processors, such as disk, keyboard, and graphics controllers; or, on mainframes, they may come in the form of more general-purpose processors, such as I/O processors that move data rapidly among the components of the system.


2)Multiprocessor System:-

                    Within the past several years, multiprocessor systems (also known as parallel systems or multicore systems) have begun to dominate the landscape of computing. Such systems have two or more processors in close communication, sharing the computer bus and sometimes the clock, memory, and peripheral devices. Multiprocessor systems first appeared prominently in servers and have since migrated to desktop and laptop systems. Recently, multiple processors have appeared on mobile devices such as smartphones and tablet computers.

Multiprocessor systems have three main advantages:-

    1)Increased Throughput

    2)Economy of scale

    3)Increased Reliability


3)Clustered System:-

                            Clustering is usually used to provide high-availability service— that is, the service will continue even if one or more systems in the cluster fail. Generally, we obtain high availability by adding a level of redundancy in the system. A layer of cluster software runs on the cluster nodes. Each node can monitor one or more of the others (over the LAN). If the monitored machine fails, the monitoring machine can take ownership of its storage and restart the applications that were running on the failed machine. The users and clients of the applications see only a brief interruption of service.

Clustering can be structured asymmetrically or symmetrically. In asymmetric clustering, one machine is in hot-standby mode while the other is running the applications. The hot-standby host machine does nothing but monitor the active server. If that server fails, the hot-standby host becomes the active server. In symmetric clustering, two or more hosts are running applications and monitoring each other. This structure is obviously more efficient, as it uses all of the available hardware. However, it does require that more than one application be available to run.


Process Virtualization:-

1)Process:- The abstraction provided by the OS of a running program is something we will call a process.

2)Process API:-The process API consists of calls programs can make related to processes. Typically, this includes creation, destruction, and other useful calls.

    a)fork() system call:-The fork() system call is used to create a new process [C63]. However, be forewarned: it is certainly the strangest routine you will ever call. The newly created process (the child) is an (almost) exact copy of the calling process (the parent): both continue running from the point of the fork() call, which returns the child's PID in the parent and 0 in the child.

    b)wait() system call:-So far, we haven’t done much: just created a child that prints out a message and exits. Sometimes, as it turns out, it is quite useful for a parent to wait for a child process to finish what it has been doing. This task is accomplished with the wait() system call (or its more complete sibling waitpid()).

    c)exec() system call:- A final and important piece of the process creation API is the exec() system call. This system call is useful when you want to run a program that is different from the calling program.

Example:-

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <string.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
    printf("hello world (pid:%d)\n", (int) getpid());
    int rc = fork();
    if (rc < 0) {
        // fork failed; exit
        fprintf(stderr, "fork failed\n");
        exit(1);
    } else if (rc == 0) {
        // child (new process)
        printf("hello, I am child (pid:%d)\n", (int) getpid());
        char *myargs[3];
        myargs[0] = strdup("wc");   // program: "wc" (word count)
        myargs[1] = strdup("p3.c"); // argument: file to count
        myargs[2] = NULL;           // marks end of array
        execvp(myargs[0], myargs);  // runs word count
        printf("this shouldn't print out\n");
    } else {
        // parent goes down this path (main)
        int rc_wait = wait(NULL);
        printf("hello, I am parent of %d (rc_wait:%d) (pid:%d)\n",
               rc, rc_wait, (int) getpid());
    }
    return 0;
}


Direct Execution:-The “direct execution” part of the idea is simple: just run the program directly on the CPU. Thus, when the OS wishes to start a program running, it creates a process entry for it in a process list, allocates some memory for it, loads the program code into memory (from disk), locates its entry point (i.e., the main() routine or something similar), jumps to it, and starts running the user’s code. Figure 6.1 shows this basic direct execution protocol (without any limits, yet), using a normal call and return to jump to the program’s main() and later back into the kernel.


CPU Scheduling:-

                CPU scheduling is the basis of multiprogrammed operating systems. By switching the CPU among processes, the operating system can make the computer more productive. CPU scheduling lets one process use the CPU while another is delayed (waiting for some resource, such as I/O), thus making full use of the CPU; a scheduling algorithm determines which ready process runs next. The purpose of CPU scheduling is to make the system more efficient, faster, and fairer. There are many different CPU-scheduling algorithms.

    a)First-Come, First-Served Scheduling

    b)Shortest-Job-First Scheduling

    c)Priority Scheduling

    d)Round-Robin Scheduling

    e)Multilevel Queue Scheduling

    f)Multilevel Feedback Queue Scheduling

    

Multi-Level Feedback:-

        Normally, when the multilevel queue scheduling algorithm is used, processes are permanently assigned to a queue when they enter the system; for example, there might be separate queues for foreground and background processes. The multilevel feedback queue scheduling algorithm, in contrast, allows a process to move between queues. The idea is to separate processes according to the characteristics of their CPU bursts. If a process uses too much CPU time, it will be moved to a lower-priority queue.


Lottery Scheduling Code:-

Lottery scheduling is a novel technique proposed for sharing a resource in a probabilistically fair manner. Lottery “tickets” are distributed to all processes sharing a resource in such a manner that a process gets as many tickets as its fair share of the resource. For example, a process would be given five tickets out of a total of 100 tickets if its fair share of the resource is 5 percent.

 When the resource is to be allocated, a lottery is conducted among the tickets held by processes that actively seek the resource. The process holding the winning ticket is then allocated the resource. The actual share of the resources allocated to the process depends on contention for the resource. Lottery scheduling can be used for fair share CPU scheduling as follows: Tickets can be issued to applications (or users) on the basis of their fair share of CPU time. An application can share its tickets among its processes in any manner it desires.

 To allocate a CPU time slice, the scheduler holds a lottery in which only tickets of ready processes participate. When the time slice is a few milliseconds, this scheduling method provides fairness even over fractions of a second if all groups of processes are active.


Multi-Processor Scheduling:-

 A multiprocessor is a system that has more than one processor but shares the same memory, bus, and input/output devices. Because multiprocessor scheduling involves multiple CPUs, load sharing becomes possible: the balancing of load among the processors. It is more complex than single-processor scheduling.


There are two types of Multi-Processor Scheduling. They are:-

  1. Symmetric Multiprocessing: In Symmetric Multiprocessing, all processors are self-scheduling. The scheduler for each processor checks the ready queue and selects a process to execute. The processors each run a copy of the operating system and communicate with one another as needed. If one of the processors goes down, the rest of the system keeps working.

  2. Asymmetric Multiprocessing: In Asymmetric Multiprocessing, all the I/O operations and scheduling decisions are performed by a single processor called the master server, and the remaining processors execute user code only. The need for data sharing can be reduced by using this method.