11. What is multitasking, multiprogramming, and multithreading?
Multiprogramming: Multiprogramming is the technique of keeping several programs in memory and running them at the same time, which creates logical parallelism: the computer appears to do several things at once. The operating system keeps several jobs in memory simultaneously, selects a job from the job pool, and starts executing it; when that job must wait for an I/O operation, the CPU is switched to another job. The main idea is that the CPU is never idle.
Multitasking: Multitasking is the logical extension of multiprogramming. The concept is similar, but the difference is that switching between jobs occurs so frequently that users can interact with each program while it is running. Such systems are also known as time-sharing systems: a time-shared operating system uses CPU scheduling and multiprogramming to give each user a small portion of the machine's time.
Multithreading: An application is typically implemented as a separate process with several threads of control. In some situations a single application must perform several similar tasks; for example, a web server accepts client requests for web pages, images, sound, and so forth. A busy web server may have many clients accessing it concurrently.
If the web server ran as a traditional single-threaded process, it could service only one client at a time, and the time a client might have to wait for its request to be serviced could be enormous. It is therefore more efficient to have one process that contains multiple threads. In a multithreaded web-server process, the server has a thread that listens for client requests; when a request arrives, rather than creating another process, the server creates another thread to service it. Multithreading is used to gain responsiveness, resource sharing, economy, and better utilization of multiprocessor architectures.
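Below is a minimal sketch of the thread-per-request idea using POSIX threads. The requests here are simulated integers rather than real socket connections, and the one-second sleep stands in for I/O work; a real web server would accept connections in the main loop instead.

```c
/* Minimal thread-per-request sketch using POSIX threads.
 * Requests are simulated integers; a real web server would accept
 * socket connections in the loop instead.  Build with: gcc -pthread */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* Each thread services one "request". */
static void *handle_request(void *arg)
{
    int id = *(int *)arg;
    free(arg);
    printf("servicing request %d\n", id);
    sleep(1);                       /* stands in for disk or network I/O */
    printf("finished request %d\n", id);
    return NULL;
}

int main(void)
{
    pthread_t tid[5];

    /* The "listener" loop: instead of creating a new process per request,
     * create a lightweight thread that shares the server's address space. */
    for (int i = 0; i < 5; i++) {
        int *req = malloc(sizeof *req);
        *req = i;
        pthread_create(&tid[i], NULL, handle_request, req);
    }
    for (int i = 0; i < 5; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```

Because all the threads share the server process's address space, open files and cached data can be shared among them, which is where the resource-sharing and economy benefits come from.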
12. What is a hard disk and what is its purpose?
A hard disk is a secondary storage device that holds data in bulk on the magnetic medium of the disk. Hard disks have rigid platters coated with the magnetic medium, which can be easily erased and rewritten; a typical desktop machine has a hard disk with a capacity of between 10 and 40 gigabytes. Data is stored on the disk in the form of files.
13. What is fragmentation? What are the different types of fragmentation?
Fragmentation occurs in a dynamic memory allocation system when many of the free blocks are too small to satisfy any request.
External fragmentation: External fragmentation happens when a dynamic memory allocation algorithm allocates some memory and a small piece is left over that cannot be used effectively. If too much external fragmentation occurs, the amount of usable memory is drastically reduced: enough total memory space exists to satisfy a request, but it is not contiguous.
Internal fragmentation: Internal fragmentation is the space wasted inside allocated memory blocks because of restrictions on the allowed sizes of allocated blocks. Allocated memory may be slightly larger than the requested memory; this size difference is memory internal to a partition that is not being used.
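A minimal sketch of internal fragmentation, assuming an allocator that rounds every request up to a fixed 4 KB block; the block size and request sizes are illustrative, not those of any particular allocator.

```c
/* Internal-fragmentation sketch: an allocator that rounds every request
 * up to a fixed 4 KB block wastes the difference inside each block. */
#include <stdio.h>

#define BLOCK_SIZE 4096   /* illustrative fixed block size */

int main(void)
{
    size_t requests[] = { 100, 5000, 4096, 6000 };   /* requested bytes */
    size_t total_wasted = 0;

    for (size_t i = 0; i < sizeof requests / sizeof requests[0]; i++) {
        size_t blocks    = (requests[i] + BLOCK_SIZE - 1) / BLOCK_SIZE;
        size_t allocated = blocks * BLOCK_SIZE;
        size_t wasted    = allocated - requests[i];  /* internal fragment */
        total_wasted += wasted;
        printf("requested %5zu -> allocated %5zu, wasted %4zu bytes\n",
               requests[i], allocated, wasted);
    }
    printf("total internal fragmentation: %zu bytes\n", total_wasted);
    return 0;
}
```

External fragmentation, by contrast, shows up as free space scattered between allocated blocks rather than as wasted space inside them.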
14. What is DRAM? In which form does it store data?
DRAM is not the fastest memory, but it is cheap, does the job, and is available almost everywhere you look. DRAM stores each bit of data in a cell made of a capacitor and a transistor. The capacitor tends to lose its charge unless it is recharged every few milliseconds, and this refreshing slows DRAM down compared to speedier RAM types.
15. What is a Dispatcher?
The dispatcher module gives control of the CPU to the process selected by the short-term scheduler. This involves switching context, switching to user mode, and jumping to the proper location in the user program to restart that program. Dispatch latency is the time it takes for the dispatcher to stop one process and start another running.
16. What is the CPU Scheduler?
The CPU scheduler selects from among the processes in memory that are ready to execute and allocates the CPU to one of them. CPU scheduling decisions may take place when a process:
1. Switches from running to the waiting state; 2. Switches from running to the ready state; 3. Switches from waiting to ready; 4. Terminates. Scheduling under 1 and 4 is non-preemptive; all other scheduling is preemptive.
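A minimal sketch of non-preemptive scheduling (decision points 1 and 4): a first-come, first-served scheduler dispatches ready processes in arrival order and lets each run to completion. The CPU-burst lengths are made-up example numbers.

```c
/* Non-preemptive FCFS sketch: the scheduler picks the next ready process
 * in arrival order and lets it run to completion (decision points 1 and 4).
 * The CPU-burst lengths are made-up example numbers. */
#include <stdio.h>

int main(void)
{
    int burst[] = { 24, 3, 3 };          /* CPU bursts in milliseconds */
    int n = sizeof burst / sizeof burst[0];
    int clock = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        printf("dispatch P%d at t=%2d ms, runs for %2d ms\n",
               i + 1, clock, burst[i]);
        total_wait += clock;             /* time spent waiting in the ready queue */
        clock += burst[i];               /* runs until it terminates */
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}
```

Under a preemptive policy such as round robin, the scheduler would also be invoked when a running process's time slice expires (case 2), instead of only when the process blocks or terminates.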
17. What is a Context Switch?
Switching the CPU to another process requires saving the state of the old process and loading the saved state of the new process. This task is known as a context switch. Context-switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions (such as a single instruction to load or store all registers).
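A conceptual sketch of the state a context switch saves and restores. A real kernel does this in assembly on the hardware registers; the struct fields and the global `cpu` variable below are illustrative placeholders, not how any particular kernel names them.

```c
/* Conceptual sketch of what a context switch saves and restores.  A real
 * kernel does this in assembly on the hardware registers; the fields and
 * the global `cpu` variable here are illustrative placeholders only. */
#include <stdio.h>
#include <string.h>

struct cpu_context {
    unsigned long pc;        /* program counter */
    unsigned long sp;        /* stack pointer */
    unsigned long regs[8];   /* general-purpose registers (illustrative) */
};

struct pcb {                 /* process control block */
    int pid;
    struct cpu_context ctx;
};

static struct cpu_context cpu;   /* stands in for the real CPU state */

/* Save the old process's state, load the new one's: pure overhead. */
static void context_switch(struct pcb *prev, struct pcb *next)
{
    memcpy(&prev->ctx, &cpu, sizeof cpu);   /* save old process */
    memcpy(&cpu, &next->ctx, sizeof cpu);   /* load new process */
}

int main(void)
{
    struct pcb a = { .pid = 1, .ctx = { .pc = 0x1000, .sp = 0x7000 } };
    struct pcb b = { .pid = 2, .ctx = { .pc = 0x2000, .sp = 0x8000 } };

    memcpy(&cpu, &a.ctx, sizeof cpu);       /* process A is running */
    context_switch(&a, &b);                 /* switch from A to B */
    printf("CPU now at pc=0x%lx, running pid %d\n", cpu.pc, b.pid);
    return 0;
}
```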
18. What is cache memory?
Cache memory is random access memory (RAM) that a computer's microprocessor can access more quickly than it can access regular RAM. As the microprocessor processes data, it looks first in the cache memory; if it finds the data there (from a previous read), it does not have to do the more time-consuming read from the larger main memory.
19. What is a Safe State and what is its use in deadlock avoidance?
When a process requests an available resource, the system must decide whether immediate allocation leaves the system in a safe state. The system is in a safe state if there exists a safe sequence of all processes, that is, an ordering in which each process can be satisfied by the currently available resources plus the resources held by all processes earlier in the sequence. Deadlock avoidance ensures that the system never enters an unsafe state.
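A minimal sketch of the safety check at the heart of deadlock avoidance (the safety portion of the Banker's algorithm): repeatedly pick a process whose remaining need fits in the currently available resources, let it finish, and reclaim its allocation. The matrices below are illustrative numbers, not taken from the text.

```c
/* Safety-check sketch (the core of deadlock avoidance): the state is safe
 * if some ordering of the processes lets each one finish using the
 * currently available resources plus what earlier processes release.
 * The matrices below are illustrative numbers. */
#include <stdbool.h>
#include <stdio.h>

#define P 3   /* number of processes */
#define R 2   /* number of resource types */

int main(void)
{
    int available[R]     = { 1, 1 };
    int allocation[P][R] = { {1, 0}, {0, 1}, {1, 1} };
    int need[P][R]       = { {1, 1}, {1, 0}, {0, 1} };  /* max minus allocation */

    int  work[R];
    bool finished[P] = { false };
    int  safe_seq[P], count = 0;

    for (int r = 0; r < R; r++)
        work[r] = available[r];

    /* Repeatedly look for an unfinished process whose need fits in work. */
    bool progress = true;
    while (progress) {
        progress = false;
        for (int p = 0; p < P; p++) {
            if (finished[p])
                continue;
            bool fits = true;
            for (int r = 0; r < R; r++)
                if (need[p][r] > work[r])
                    fits = false;
            if (fits) {
                for (int r = 0; r < R; r++)
                    work[r] += allocation[p][r];   /* p finishes and releases */
                finished[p] = true;
                safe_seq[count++] = p;
                progress = true;
            }
        }
    }

    if (count == P) {
        printf("safe state, safe sequence:");
        for (int i = 0; i < P; i++)
            printf(" P%d", safe_seq[i]);
        printf("\n");
    } else {
        printf("unsafe state\n");
    }
    return 0;
}
```

If every process can be marked finished this way, the recorded order is a safe sequence; otherwise the state is unsafe, and a request that would lead to it should be delayed.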
20. What is a Real-Time System?
A real-time process is a process that must respond to events within a certain time period. A real-time operating system is an operating system that can run real-time processes successfully.