21. Explain the difference between a microkernel and a monolithic (macro) kernel?
Micro-Kernel: A micro-kernel is a minimal kernel that performs only the essential functions of an operating system; all other operating system functions are performed by system processes.
Monolithic: A monolithic operating system is one where all operating system code is in a single executable image and all of that code runs in system mode.
22. What is multitasking, multiprogramming, and multithreading?
Multiprogramming: Multiprogramming is the technique of keeping several programs in memory at the same time and sharing the CPU among them. It allows a computer to do several things at the same time; multiprogramming creates logical parallelism.
The idea is that the operating system keeps several jobs in memory simultaneously. It selects a job from the job pool and starts executing it; when that job needs to wait for an I/O operation, the CPU is switched to another job. The main point is that the CPU is never idle.
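As a rough illustration of this switch-on-I/O-wait idea (not part of the original answer), here is a tiny C sketch of a dispatcher that skips blocked or finished jobs so the CPU always runs something runnable; the job table and field names are invented for this example.

#include <stdio.h>
#include <stdbool.h>

/* Minimal sketch (illustrative only): the dispatcher always picks a job
 * that is ready to run, so the CPU is never idle while any job is ready. */
struct job { const char *name; int bursts_left; bool waiting_for_io; };

int main(void) {
    struct job jobs[] = { {"J1", 2, false}, {"J2", 3, false}, {"J3", 1, false} };
    int n = 3, done = 0;

    while (done < n) {
        for (int i = 0; i < n; i++) {
            if (jobs[i].bursts_left == 0 || jobs[i].waiting_for_io)
                continue;                      /* skip finished or blocked jobs */
            printf("CPU runs %s\n", jobs[i].name);
            jobs[i].bursts_left--;             /* job issues an I/O request ... */
            jobs[i].waiting_for_io = true;     /* ... so the CPU switches away  */
            if (jobs[i].bursts_left == 0) done++;
        }
        for (int i = 0; i < n; i++)            /* pretend all pending I/O finished */
            jobs[i].waiting_for_io = false;
    }
    return 0;
}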
Multitasking: Multitasking is the logical extension of multiprogramming. The concept is quite similar to multiprogramming, but the difference is that switching between jobs occurs so frequently that users can interact with each program while it is running. Such systems are also known as time-sharing systems: a time-shared operating system uses CPU scheduling and multiprogramming to provide each user with a small portion of the shared computer's time.
Multithreading: An application is typically implemented as a separate process with several threads of control. In some situations a single application may be required to perform several similar tasks; for example, a web server accepts client requests for web pages, images, sound, and so forth. A busy web server may have many clients concurrently accessing it. If the web server ran as a traditional single-threaded process, it would be able to service only one client at a time, and the amount of time a client might have to wait for its request to be serviced could be enormous.
It is therefore more efficient to have one process that contains multiple threads serving the same purpose. This approach multithreads the web-server process: the server creates a thread that listens for client requests, and when a request arrives it creates another thread to service the request rather than creating another process.
So multithreading is used to gain advantages such as responsiveness, resource sharing, economy, and utilization of multiprocessor architectures.
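A minimal sketch of this one-thread-per-request idea using POSIX threads; the request loop and the handle_request function are simplified stand-ins for a real server's accept loop, not an actual web-server implementation.

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Stand-in for servicing one client request (a real server would read a
 * socket, build a response, and so on). */
static void *handle_request(void *arg) {
    int client_id = *(int *)arg;
    free(arg);
    printf("serving client %d\n", client_id);
    return NULL;
}

int main(void) {
    /* The listening loop: instead of forking a new process per request,
     * the server creates a lightweight thread for each one. */
    for (int i = 0; i < 5; i++) {          /* pretend 5 requests arrive */
        pthread_t tid;
        int *id = malloc(sizeof *id);
        if (!id) continue;
        *id = i;
        if (pthread_create(&tid, NULL, handle_request, id) == 0)
            pthread_detach(tid);           /* let each thread clean up on exit */
        else
            free(id);
    }
    pthread_exit(NULL);                    /* keep the process alive until the threads finish */
}

Built with something like cc server.c -lpthread, each simulated request is serviced by its own detached thread while the main thread keeps accepting new ones.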
23. Give a non-computer example of preemptive and non-preemptive scheduling?
Ans: Consider any system where people use some kind of resource and compete for it. A non-computer example of preemptive scheduling is traffic on a single-lane road: if there is an emergency, such as an ambulance on the road, the other vehicles give way to the vehicle in need. An example of non-preemptive scheduling is people standing in a queue for tickets: each person is served completely before the next one starts.
24. What is starvation and aging?
Ans:
Starvation: Starvation is a resource-management problem where a process does not get the resources it needs for a long time because the resources are being allocated to other processes.
Aging: Aging is a technique to avoid starvation in a scheduling system. It works by adding an aging factor to the priority of each request. The aging factor must increase the request's priority as time passes and must ensure that a request will eventually become the highest-priority request (after it has waited long enough).
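To make the aging idea concrete, here is a small illustrative C sketch: on every scheduling tick a waiting request's effective priority grows, so a low-priority request eventually outranks a high-priority one. The structure and the +1-per-tick policy are made up for this example.

#include <stdio.h>

/* Illustrative aging sketch: every tick, a waiting request's effective
 * priority grows, so a long-waiting request eventually becomes the
 * highest-priority one. Field names are invented for illustration. */
struct request { int base_priority; int waited_ticks; };

static int effective_priority(const struct request *r) {
    /* aging factor: +1 priority per tick waited (policy chosen arbitrarily) */
    return r->base_priority + r->waited_ticks;
}

int main(void) {
    struct request low  = { 1, 0 };   /* low base priority  */
    struct request high = { 10, 0 };  /* high base priority */

    for (int tick = 0; tick < 12; tick++) {
        low.waited_ticks++;           /* the low-priority request keeps waiting */
        if (effective_priority(&low) > effective_priority(&high)) {
            printf("tick %d: aged request now wins\n", tick);
            break;
        }
    }
    return 0;
}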
25. Different types of Real-Time Scheduling?
Ans:
Hard real-time systems – required to complete a critical task within a guaranteed amount of time.
Soft real-time computing – requires that critical processes receive priority over less fortunate ones.
26. What are the Methods for Handling Deadlocks?
Ans:
->Ensure that the system will never enter a deadlock state.
->Allow the system to enter a deadlock state and then recover.
->Ignore the problem and pretend that deadlocks never occur in the system; this approach is used by most operating systems, including UNIX.
27. What is a Safe State and its use in deadlock avoidance?
Ans: When a process requests an available resource, the system must decide whether immediate allocation leaves the system in a safe state.
->The system is in a safe state if there exists a safe sequence of all processes.
->A sequence <P1, P2, ..., Pn> is safe if, for each Pi, the resources that Pi can still request can be satisfied by the currently available resources plus the resources held by all Pj with j < i.
If Pi's resource needs are not immediately available, then Pi can wait until all such Pj have finished.
When Pj is finished, Pi can obtain the needed resources, execute, return the allocated resources, and terminate.
When Pi terminates, Pi+1 can obtain its needed resources, and so on.
->Deadlock avoidance: ensure that the system will never enter an unsafe state.
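The safe-sequence check above is essentially the safety algorithm used by the banker's algorithm. Below is a small illustrative C sketch (the matrices and numbers are made up): it repeatedly looks for a process whose remaining need fits in the work vector, pretends that process finishes and returns its allocation, and reports the state unsafe if no such process can be found.

#include <stdio.h>
#include <stdbool.h>

#define P 3   /* number of processes      */
#define R 2   /* number of resource types */

/* Safety check sketch: returns true if some safe sequence exists, i.e.
 * every process's remaining need can eventually be met from what is
 * available plus what already-finished processes release. */
static bool is_safe(int available[R], int alloc[P][R], int need[P][R]) {
    int work[R];
    bool finished[P] = { false };
    for (int j = 0; j < R; j++) work[j] = available[j];

    for (int count = 0; count < P; ) {
        bool found = false;
        for (int i = 0; i < P; i++) {
            if (finished[i]) continue;
            bool fits = true;
            for (int j = 0; j < R; j++)
                if (need[i][j] > work[j]) { fits = false; break; }
            if (fits) {
                for (int j = 0; j < R; j++)   /* Pi finishes and returns its resources */
                    work[j] += alloc[i][j];
                finished[i] = true;
                found = true;
                count++;
            }
        }
        if (!found) return false;             /* no runnable process: unsafe state */
    }
    return true;
}

int main(void) {
    int available[R] = { 1, 1 };
    int alloc[P][R]  = { {1, 0}, {0, 1}, {1, 1} };
    int need[P][R]   = { {1, 1}, {1, 0}, {0, 0} };  /* max minus alloc; numbers invented */
    printf(is_safe(available, alloc, need) ? "safe\n" : "unsafe\n");
    return 0;
}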
28. Recovery from Deadlock?
Ans:
Process Termination:
->Abort all deadlocked processes.
->Abort one process at a time until the deadlock cycle is eliminated.
->In which order should we choose to abort? Consider:
Priority of the process.
How long the process has computed, and how much longer it needs to complete.
Resources the process has used.
Resources the process needs to complete.
How many processes will need to be terminated.
Whether the process is interactive or batch.
Resource Preemption:
->Selecting a victim – minimize cost.
->Rollback – return to some safe state and restart the process from that state.
->Starvation – the same process may always be picked as the victim, so include the number of rollbacks in the cost factor.
29. Difference between Logical and Physical Address Space?
Ans:
->The concept of a logical address space that is bound to a separate physical address space is central to proper memory management.
Logical address – generated by the CPU; also referred to as a virtual address.
Physical address – the address seen by the memory unit.
->Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.
30. Binding of Instructions and Data to Memory?
Ans: Address binding of instructions and data to memory addresses can happen at three different stages:
Compile time: If the memory location is known a priori, absolute code can be generated; the code must be recompiled if the starting location changes.
Load time: Relocatable code must be generated if the memory location is not known at compile time.
Execution time: Binding is delayed until run time if the process can be moved during its execution from one memory segment to another. This needs hardware support for address maps (e.g., base and limit registers).
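As an illustration of execution-time binding (the numbers are chosen for this example only), the sketch below mimics an MMU with base and limit registers: every logical address is checked against the limit and then relocated by adding the base.

#include <stdio.h>
#include <stdlib.h>

/* Sketch of execution-time binding with base and limit registers:
 * the MMU adds the base to every logical (virtual) address and traps
 * if the address falls outside the process's limit. */
struct mmu { unsigned base; unsigned limit; };

static unsigned translate(const struct mmu *m, unsigned logical) {
    if (logical >= m->limit) {
        fprintf(stderr, "trap: address %u beyond limit %u\n", logical, m->limit);
        exit(EXIT_FAILURE);
    }
    return m->base + logical;   /* relocation happens at every memory access */
}

int main(void) {
    struct mmu m = { 14000, 1200 };   /* process loaded at 14000, 1200 bytes long */
    printf("logical 0   -> physical %u\n", translate(&m, 0));
    printf("logical 346 -> physical %u\n", translate(&m, 346));
    return 0;
}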