3.1.1 Process State:
As a process executes, it changes state. During its lifespan, a process may be in one of the following states (associated with each state there is usually a queue on which the process resides):
Fig. Various States of a Process
- New: A process that is being created is in the new state. It remains in the new state because the system has not yet permitted it to enter the ready state, due to the limited memory available for the ready queue. When some memory becomes available, the process moves from the new state to the ready state.
- Ready State: A process which is not waiting for any external event such as an I/O operation, and which is not running, is said to be in the ready state. It is not in the running state because some other process is already running; it is waiting for its turn to enter the running state.
- Running State: The process which is currently executing and has control of the CPU is in the running state. On a single-processor system, only one process can be in the running state at any instant; on a multiprocessor system, one process may be running on each processor.
- Blocked State: A process that is currently waiting for an external event such as an I/O operation is said to be in the blocked state. After the I/O operation completes, the process moves from the blocked state to the ready state, and from the ready state, when its turn comes, it goes back to the running state.
- Terminated / Halted State: A process whose execution is completed moves from the running state to the terminated state. In the halted state, the memory occupied by the process is released.
New: Process Creation.
Running: Instruction Execution.
Waiting: Process waiting for some event to occur.
Ready: The process is waiting to be assigned to a processor.
Terminated: Process execution is over.
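The five-state life cycle above can be sketched as a small state machine. This is an illustrative sketch only: the state names and the transition table are taken from the description above, not from any particular operating system.

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    BLOCKED = auto()
    TERMINATED = auto()

# Legal moves in the five-state model described above.
TRANSITIONS = {
    State.NEW: {State.READY},            # admitted once memory is available
    State.READY: {State.RUNNING},        # dispatched by the scheduler
    State.RUNNING: {State.READY,         # preempted, waits for its next turn
                    State.BLOCKED,       # waiting for an external event (I/O)
                    State.TERMINATED},   # execution finished
    State.BLOCKED: {State.READY},        # the awaited I/O has completed
    State.TERMINATED: set(),             # final state, memory is released
}

def move(current, target):
    """Return the new state, or raise ValueError on an illegal transition."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current.name} -> {target.name}")
    return target
```

Note that a blocked process can never go straight back to running; it must pass through the ready state and wait for its turn, exactly as described above.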
3.1.2 Process Control Block (PCB):
The operating system executes programs as different processes. As a process runs, it is necessary to keep certain details related to it. The PCB is a record or data structure that is maintained for each and every process: every process has exactly one PCB associated with it. A PCB is created when a process is created, and it is removed from memory when the process terminates. The following figure shows a PCB:
[Figure: the Process Control Block, with fields such as process state, process number, program counter, CPU registers, memory limits, and the list of open files.]
Fig. Process Control Block
A PCB may contain several types of information depending upon the process to which it belongs, and the information stored may vary from process to process. In general, a PCB contains information regarding:
- Process Number: Each process is identified by its process number, called the process identification number (PID). Every process has a unique PID, assigned by the OS, through which it is identified; no two processes can have the same PID.
- Priority: Each process is assigned a level of priority that corresponds to the relative importance of the event it services. Process priority is the preference of one process over another for execution. Priority may be given by the user or system manager, or it may be assigned internally by the OS. This field stores the priority of the process.
- Process State: This information describes the current state of the process, i.e. whether the process is in the new, ready, running, waiting or terminated state.
- Program Counter: This contains the address of the next instruction to be executed for this process.
- CPU Registers: CPU registers vary in number and type, depending upon the computer architecture. They include index registers, stack pointers, general-purpose registers, etc. When an interrupt occurs, the current status of the process is saved in this field along with the program counter. This information is necessary to allow the process to be resumed correctly after the interrupt is serviced.
- CPU Scheduling Information: This information includes a process priority, pointers to scheduling queues and any other scheduling parameters.
- Memory Management Information: This may include the values of the base and limit registers, the page table, or the segment table, i.e. memory-management-related information. It varies from one OS to another.
- Accounting: This includes the actual CPU time used in executing the process (in order to charge individual users for processor time), time limits, account numbers, and the job or process number.
- I/O Status: I/O status information includes the list of I/O devices allocated to the process, outstanding I/O requests, pending operations, and so on.
- File Management: It includes information about all open files, access rights, etc.
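As a rough illustration, the PCB fields listed above can be modelled as a single record per process. The field names below are illustrative, not those of any real kernel:

```python
from dataclasses import dataclass, field

@dataclass
class PCB:
    """One record per process, holding the fields described above."""
    pid: int                          # unique process identification number
    state: str = "new"                # new / ready / running / waiting / terminated
    priority: int = 0                 # scheduling priority
    program_counter: int = 0          # address of the next instruction
    registers: dict = field(default_factory=dict)    # saved CPU registers
    base_register: int = 0            # memory-management information
    limit_register: int = 0
    cpu_time_used: float = 0.0        # accounting information
    open_files: list = field(default_factory=list)   # file-management information
    io_devices: list = field(default_factory=list)   # I/O status information
```

A new PCB starts in the new state with empty register, file and device lists, and each process gets its own distinct record.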
3.2.1 Process Scheduling:
The main objective of an OS is multiprogramming and multitasking. For multiprogramming it is necessary to have multiple programs in execution at all times, which maximizes CPU and I/O device utilization.
The main aim of a time-sharing system is to keep the CPU switching among multiple processes. To meet both objectives, multiprogramming and time sharing, processes must be scheduled according to their priority.
The processes are arranged in queues, and each process is associated with its PCB. The process scheduler selects an available process from the set of queues for execution. On a single processor there will be only one process running at any given time; if there are more processes, they have to wait until the CPU becomes free and they can be rescheduled.
3.2.2 Scheduling Queues:
As a process is created, i.e. when it enters the system, it is put into a job queue; the job queue therefore contains all the processes in the system. Processes that are ready and waiting for execution are kept in a list called the ready queue, generally implemented as a linked list: the header of the ready queue contains pointers to the first and last PCBs in the list.
Suppose one process from the ready queue is taken up for execution by the processor, and it needs an I/O device such as a disk, so it issues an I/O request. The requested disk may be busy with another process, in which case the requesting process has to wait for the disk. The system therefore also includes other queues: a queue that keeps the processes waiting for a particular I/O device is known as a device queue, and each device has its own device queue.
Ready Queue: The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue.
Job Queue: It is a collection or set of processes in the system present at a particular moment.
Device Queue: It is a set or collection of processes waiting for Input/Output devices.
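The three queues can be sketched as simple FIFO lists. This is a toy model, assuming (for simplicity) that a job leaves the job pool once it is admitted to the ready queue and that processes are identified by plain numbers:

```python
from collections import deque

job_queue = deque()     # every job submitted to the system
ready_queue = deque()   # processes in main memory, waiting for the CPU
device_queue = deque()  # processes waiting for one particular I/O device

def submit(pid):
    """A newly created process enters the system's job pool."""
    job_queue.append(pid)

def admit():
    """Long-term scheduler: move one job into the ready queue."""
    ready_queue.append(job_queue.popleft())

def dispatch():
    """Short-term scheduler: give the CPU to the process at the head."""
    return ready_queue.popleft()

def request_io(pid):
    """The running process asks for a device and joins its device queue."""
    device_queue.append(pid)

def io_complete():
    """On I/O completion, the process goes back to the ready queue."""
    ready_queue.append(device_queue.popleft())
```

A process thus migrates between the ready queue and the device queues throughout its lifetime, which is exactly the movement the schedulers below manage.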
Throughout its lifetime, a process migrates among the various scheduling queues. The OS must select processes from these queues in some manner; this selection is carried out by schedulers.
There are three different levels at which the OS can schedule process:
- Long Term Scheduling: (Job Queue)
If the number of processes in the ready queue grows large, the overhead on the OS for maintaining long lists, context switching and dispatching increases. Therefore, only a limited number of processes are allowed into the ready queue to compete for the CPU. This is managed by the long-term scheduler.
- Short Term Scheduling: (Ready Queue)
Another scheduler selects, from among the processes that are ready to execute, the one to which the CPU is allocated; it is called the short-term scheduler or CPU scheduler. It decides which of the ready processes is to be scheduled or dispatched next, and hence is also called the dispatcher.
- Medium Terms Scheduling:
The main memory of the computer system is limited and can hold only a certain number of processes. If sufficient memory for process execution is not available, the process gets blocked and is swapped out to disk. When some memory is freed, the OS examines the swapped-out but ready processes. Depending upon priority and memory-resource requirements, the OS decides which process is to be swapped in. After swapping it in, the OS links its PCB into the chain of ready processes for dispatching. This is the function of the medium-term scheduler.
The following table compares the two main types of schedulers:
|Short term scheduler||Long term scheduler|
|1) Short term scheduler is known as CPU scheduler.||1) Long-term scheduler is known as job scheduler.|
|2) Short-term scheduler selects which process should be executed next and allocates CPU.||2) Long-term scheduler selects which processes should be brought into the ready queue.|
|3) Short-term scheduler is invoked very frequently (milliseconds).||3) Long-term scheduler is invoked very infrequently (seconds, minutes).|
|4) Short term scheduler is fast.||4) Long term scheduler is relatively slow.|
- Context Switching:
When two processes run at the same time, process 1 executes while process 2 waits for an I/O, and process 2 executes while process 1 waits for an I/O. The time required for turning the CPU's attention from process 1 to process 2 is called the context-switching time.
After a context switch, the old program remains in main memory. The status of the current process must be stored in a specific memory area that the OS maintains for each process. This area is called the register save area and is a part of the PCB.
When a process issues an I/O system call, the OS sets this process aside and starts executing another process. But before starting the execution of the other process, the status of the original process is saved in its register save area. When the original process completes its I/O operation, it can be executed again, but at that time the CPU may be busy executing some other process. So, before resuming the execution of the original process, the context of the currently running process must be saved in its own register save area, and the CPU registers must be loaded with the values saved in the register save area of the original process to be executed next.
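The save/restore sequence above can be sketched as follows, with each process's register save area and the CPU modelled as plain dictionaries. This is an illustrative simplification, not real kernel code:

```python
def context_switch(old_pcb, new_pcb, cpu):
    """Save the running process's context into its register save area,
    then load the next process's saved context into the CPU."""
    # 1) Save the old context into the old process's PCB.
    old_pcb["registers"] = dict(cpu["registers"])
    old_pcb["pc"] = cpu["pc"]
    # 2) Restore the new process's saved context into the CPU.
    cpu["registers"] = dict(new_pcb["registers"])
    cpu["pc"] = new_pcb["pc"]
```

After the call, the CPU holds exactly the context that was previously saved for the incoming process, and the outgoing process's register save area holds the context it needs to resume later.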
- Operations on Processes:
- Process Creation: In general-purpose systems, some way is needed to create processes as required during operation. There are four principal events that lead to process creation:
- System initialization.
- Execution of a process-creation system call by a running process.
- A user request to create a new process.
- Initialization of a batch job.
That is, when the OS is booted, several processes are created. Some of these are foreground processes that interact with users. Background processes stay in the background, sleeping, but suddenly spring to life to handle activity such as email, web pages, printing, and so on; they do not interact with particular users but complete some specific task. A process may create a new process by issuing a process-creation call such as 'fork'. The creating process is called the parent process and the created one is called the child process. After the fork, the two processes, the parent and the child, have the same memory image, the same environment strings and the same open files. After a new process is created, two possibilities exist in terms of execution:
- The parent continues to execute concurrently with its children.
- The parent waits until some or all of its children have terminated.
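The second possibility, a parent that waits until its child has terminated, can be demonstrated with the fork call itself. The sketch below is Unix-only (Python's os module wraps the underlying system calls), and the exit status 7 is an arbitrary example value:

```python
import os

def spawn_and_wait():
    """Fork a child; the parent blocks until the child has terminated."""
    pid = os.fork()
    if pid == 0:
        # Child: starts with the same memory image as the parent.
        os._exit(7)                 # normal exit with status 7 (example value)
    # Parent: wait for this particular child to terminate.
    _, status = os.waitpid(pid, 0)
    return os.WEXITSTATUS(status)   # extract the child's exit code
```

Right after `os.fork()` returns, both processes are running the same code; they are told apart only by the return value (0 in the child, the child's PID in the parent).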
- Process Termination: A process terminates when it finishes executing its last statement. Its resources are returned to the system, it is purged from any system lists or tables, and its process control block (PCB) is erased, i.e. the PCB's memory space is returned to a free memory pool. A process terminates, or is terminated, usually for one of the following reasons:
- Normal Exit: Most processes terminate because they have done their job. In UNIX this is done by the exit call.
- Error Exit: The process discovers a fatal error; for example, a user tries to compile a source file that does not exist.
- Fatal Error: An error caused by the process due to a bug in the program, for example executing an illegal instruction, referencing non-existent memory, or dividing by zero.
- Killed by another Process: A process executes a system call telling the Operating Systems to terminate some other process.
Inter-Process Communication (IPC):
Processes executing concurrently in the system may be independent processes or cooperating processes. Independent processes are those which cannot affect, and cannot be affected by, the other processes executing in the system; a process which does not share anything with any other process is an independent process.
On the other hand, processes which cooperate with each other, i.e. which can affect or be affected by the other executing processes, are called cooperating processes.
Need Of IPC:
- Information Sharing
- Computation Speedup
- Shared Memory: IPC using shared memory requires a region of memory shared among the communicating processes. In the shared-memory model, a shared region is established, and the cooperating processes exchange data or information through it: all of them can read and/or write data in the shared memory segment. The form of the data and its location are determined by the processes that want to communicate with each other, not by the OS. Shared memory offers maximum speed and convenience of communication, since within one computer it can be done at memory speeds. This model is faster than the message-passing model because message passing uses system calls, which are time-consuming and need more intervention of the kernel; with shared memory, system calls are required only to establish the shared-memory region.
- Message Passing: In this model, communication takes place by exchanging messages between cooperating processes. It allows processes to communicate and synchronize their actions without sharing the same address space. It is particularly useful in a distributed environment, where the communicating processes may reside on different computers connected by a network. Communication requires sending and receiving messages through the kernel, and the processes that want to communicate must have a communication link between them. Message passing is convenient for exchanging small amounts of data, as no conflicts need be avoided, and it is easier than shared memory for inter-computer communication.
- Shared Memory System Techniques:
In a shared-memory system, two buffering techniques are used:
- The unbounded buffer places no limit on the size of the buffer. When an unbounded buffer is used, the consumer may have to wait for new items to be placed in it, but the producer can always produce items without waiting.
- The bounded buffer assumes a fixed buffer size. In this case the consumer must wait if the buffer is empty, and the producer must wait if the buffer is full.
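The bounded-buffer behaviour can be sketched with threads standing in for the producer and consumer processes; a thread-safe queue with a maximum size blocks the producer when full and the consumer when empty. The buffer size of 3 and the item count of 10 are arbitrary example values:

```python
import threading
import queue

buffer = queue.Queue(maxsize=3)   # bounded buffer: holds at most 3 items

def producer(items):
    for item in items:
        buffer.put(item)          # blocks while the buffer is full

def consumer(count, out):
    for _ in range(count):
        out.append(buffer.get())  # blocks while the buffer is empty

results = []
p = threading.Thread(target=producer, args=(range(10),))
c = threading.Thread(target=consumer, args=(10, results))
p.start(); c.start()
p.join(); c.join()
```

An unbounded buffer is the same sketch with no `maxsize`, in which case only the consumer ever waits.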
Message Passing System Techniques:
In a message-passing system, the following techniques are used:
- Naming: If processes P and Q want to communicate, they must send messages to and receive messages from each other, so a logical communication link must exist between them, and they should know the names of the communicating processes for identification. There are two ways of implementing the logical communication link:
- In Direct Communication, each process that wants to communicate must explicitly name the sender or the recipient of the communication. In this scheme the send() and receive() primitives are defined as follows:
Send (P, message): Send a message to process P.
Receive (Q, message): Receive a message from Process Q.
- In Indirect Communication, messages are sent to and received from mailboxes (also referred to as ports). A mailbox can be viewed as an object into which messages can be placed and from which messages can be removed. Each mailbox is associated with a unique number, and any number of different mailboxes can be used for communication between two processes. Two processes can communicate only if they have a shared mailbox.
Send (A, message): Send a message to mailbox A.
Receive (A, message): Receive a message from mailbox A.
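Indirect communication can be sketched by modelling each mailbox as a FIFO queue shared by the communicating parties. This is an illustrative sketch, not a real IPC facility; the mailbox name "A" mirrors the primitives above:

```python
import queue

mailboxes = {}   # mailbox name -> FIFO queue of messages

def send(mailbox, message):
    """Send(A, message): deposit a message in mailbox A."""
    mailboxes.setdefault(mailbox, queue.Queue()).put(message)

def receive(mailbox):
    """Receive(A, message): remove the oldest message from mailbox A."""
    return mailboxes[mailbox].get()
```

The sender and receiver never name each other; they only name the shared mailbox, which is what makes the communication indirect.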
- Buffering: Whether the communication is direct or indirect, the messages exchanged by the communicating processes reside in a temporary queue. There are three ways to implement this queue:
- Zero Capacity: The maximum length of the queue is zero, i.e. the link cannot have any message waiting in it. The sender must wait until the recipient receives the message. This is sometimes called a message system with no buffering.
- Bounded Capacity: The queue has a fixed length or size. If the size of the queue is 'n', then at most 'n' messages can reside in it. If the queue is not full, a new message can be sent to it and the sender can continue execution without waiting; if the queue is full, the sender must wait until space becomes available in the queue.
- Unbounded Capacity: The size of the queue is potentially infinite, so any number of messages can reside in it. Here the sender never waits.
Critical Section Problem:
A race condition occurs when two or more processes read or write shared data and the final result depends on the precise order in which the processes run. This condition occurs because of shared files or shared memory variables. To avoid it, we have to ensure that if one process is using some file or variable, no other process is allowed to share it at the same time.
Mutual exclusion is a way of ensuring that if one process is using some shared file or variable, the other processes are excluded from doing the same operation. A process may access shared memory or files, or do other critical things, which may invite a race condition. The part of the program where the shared memory is accessed is known as the critical section. To avoid race conditions, we have to ensure that no two processes are in their critical sections at the same time.
Mutual exclusion alone, however, is not sufficient to solve the race-condition problem. A good solution must satisfy the following four conditions:
- No two processes may be in their critical regions at the same time.
- No assumptions may be made about parameters such as speeds or the number of CPUs.
- A process running outside its critical section must not block other processes.
- A process trying to enter its critical section must not have to wait forever.
As an example of mutual exclusion using critical regions, consider two processes A and B, and four points in time T1, T2, T3, T4. Process A enters its critical section at time T1. A little later, process B tries to enter its critical section, but process A is still inside its critical section between T2 and T3, so process B is blocked.
Once process A moves out of its critical section, process B enters the critical section and remains there during the interval T3 to T4. At T4 it leaves the critical section.
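The same blocking behaviour can be sketched with two threads incrementing a shared counter: the lock guarantees that only one of them is inside the critical section at a time, so no updates are lost. The thread count and iteration count are arbitrary example values:

```python
import threading

counter = 0                # shared variable: the source of the race condition
lock = threading.Lock()    # enforces mutual exclusion

def increment(times):
    global counter
    for _ in range(times):
        with lock:         # entry: at most one thread passes this point
            counter += 1   # critical section: shared data is accessed
        # exit: the lock is released, another thread may now enter

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Without the lock, interleaved read-modify-write steps could lose updates.
```

With the lock in place, the final value is exactly 200000 on every run; without it, the result would depend on how the two threads happen to interleave.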
Threads:
A thread, sometimes called a lightweight process, is a basic unit of CPU utilization. A traditional (or heavyweight) process has a single thread of control; if a process has multiple threads of control, it can do more than one task at a time. A thread has a thread ID, a program counter, a register set and a stack. It shares its code section, data section and other OS resources, such as open files and signals, with the other threads belonging to the same process.
Modern operating systems are multithreaded, which makes them very powerful. In many applications, multiple activities are performed at once, and some of them block from time to time; such an application can be decomposed into mini-processes known as threads.
Threads are lighter weight than processes, and they are easier and faster to create and destroy. In many systems, thread creation is 10-100 times faster than process creation. This property is useful when we want to change the number of threads dynamically and rapidly.
E.g. 1) A word processor can be written as a two-threaded program. One thread interacts with the user through the keyboard and mouse, while the other handles reformatting of the file in the background. When a sentence is deleted from page 1, the deletion command is accepted by the first, interactive thread, and the reformatting is handled by the second, reformatting thread; meanwhile the interactive thread continues to service the keyboard and mouse. 2) A web browser has many threads: one thread may display images, some may display text, others may fetch data from the network, and so on.
- Benefits of Multithreaded Programming:
- Responsiveness: Multithreading an interactive application may allow a program to continue running even if part of it is blocked or is performing a lengthy operation, thereby increasing responsiveness to the user.
For example: a multithreaded web browser could still allow user interaction in one thread while an image is being loaded in another thread.
- Resource Sharing: By default, threads share the memory and the resources of the process to which they belong. The benefit of this sharing is that it allows an application to have several different threads of activity all within the same address space.
For example: a multithreaded word processor allows all threads to have access to the document being edited.
- Economy: Because threads share the resources of the process to which they belong, it is more economical to create and context-switch threads than processes, for which these operations are much more time-consuming. For example, in Sun OS Solaris 2, creating a process is about 30 times slower than creating a thread, and context switching between processes is about five times slower than switching between threads.
- Utilization of Multiprocessor Architectures: The benefits of multithreading are greatly increased in a multiprocessor architecture, where each thread may run in parallel on a different processor. A single-threaded process can run on only one CPU, no matter how many are available. Multithreading is therefore very useful on multi-CPU machines to increase concurrency.
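Echoing the word-processor example above, here is a minimal sketch of two threads of one process sharing the same address space; the thread roles and messages are illustrative only:

```python
import threading

shared_document = []   # threads of one process share its memory by default

def interactive_thread():
    # Accepts user commands (keyboard/mouse in the word-processor example).
    shared_document.append("sentence deleted by user")

def reformat_thread():
    # Reformats the document in the background.
    shared_document.append("document reformatted")

t1 = threading.Thread(target=interactive_thread)
t2 = threading.Thread(target=reformat_thread)
t1.start(); t2.start()
t1.join(); t2.join()
```

Both threads write into the same list without any copying or message passing, which is exactly the resource-sharing benefit described above.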
User Thread & Kernel Thread:
Threads can be implemented at two different levels:
- User Level
- Kernel Level
Threads implemented at the user level are known as user threads. At the user level, thread management is done by the application, and the kernel is not aware of the existence of threads. The thread library contains code for creating and destroying threads, for passing messages and data between threads, for scheduling thread execution, and for saving and restoring thread contexts.
- Thread switching does not require kernel mode privileges.
- User level thread can run on any Operating System.
- User Level Threads are easy to create & manage.
- Scheduling can be application specific.
Threads implemented at the kernel level are known as kernel threads. They are handled entirely by the operating system scheduler. Any application can be programmed as a multithreaded application, and all of the threads within an application are supported within a single process. The kernel maintains context information for the process as a whole and for the individual threads within the process.
- If one thread in a process is blocked, the kernel can schedule another thread of the same process.
- Kernel Threads are generally slower to create & manage than user threads.
- Transfer of control from one thread to another within the same process requires a mode switch to the kernel.
Multithreading refers to the ability of an OS to support multiple threads of execution within a single process. In a multithreaded environment there can be multiple processes, each with multiple threads, as in a multiuser OS such as UNIX. Kernel threads cannot be controlled directly by the application programmer; instead, a thread library provides the application program interface for handling user threads. A system may provide support for both user and kernel threads, resulting in different multithreading models:
- Many to One Model:
In the many-to-one model, many user-level threads are mapped to a single kernel thread. Thread management is done by the thread library in user space, which makes it very efficient. The user threads are not directly visible to the kernel and do not require any kernel support.
However, only one user thread can access the kernel at a time, so if that thread blocks, the entire process is blocked.
Advantages: 1) Use of Library. 2) Efficient system in terms of performance. 3) One kernel thread controls multiple user threads.
Disadvantages: 1) Cannot run multiple user threads parallel.
- One to One Model:
In this model, each user-level thread is mapped to a kernel thread, so the kernel provides full support. Thread management by the kernel is slower than thread management by a user-level library. The kernel provides thread scheduling, so when one thread blocks, another can still run. The application programmer should be careful when creating user threads, because the creation of each user thread requires the creation of a kernel thread.
Advantages: 1) Multiple threads can run parallel. 2) Less complication in the processing.
Disadvantages: 1) For every user thread, a kernel thread must be created. 2) This reduces the performance of the system.
- Many to Many Model:
This model allows many user-level threads to be mapped to an equal or smaller number of kernel threads, depending on the application or even on the particular machine. It provides very fast and efficient thread management, which gives better application performance and system throughput, and the programmer need not worry about the number of threads to be created. However, this model is complex to implement.
Advantages: 1) As many threads can be created as the user requires. 2) The number of kernel threads can be equal to or smaller than the number of user threads.
Disadvantages: 1) True concurrency cannot be achieved. 2) Multiple kernel threads are an overhead for the operating system.