Question: What are the 5 operating systems?
Answer:
The 5 most common operating systems are Apple macOS, Apple iOS, Microsoft Windows, Google Android, and Linux.
Find the most asked operating system interview questions and answers on this page. Prepare for your interview with PrepInsta.
An operating system (OS) is the interface between the computer hardware and the user. The OS carries out tasks requested by end users, and supporting tasks that provide functionality to them, using the computation power and support provided by the system's hardware.
Popular operating systems include Windows, macOS, Linux, and Android.
The operating system is a software program that enables the computer hardware to communicate and operate with software applications, and it acts as an interface between the user and the computer hardware. It is the most important part of a computer system; without it, the computer is just a box.
Deadlock is a situation in which two or more processes wait for each other to finish, and none of them ever does. Consider two trains coming toward each other on a single track: once they are in front of each other, neither can move. A similar situation occurs in operating systems when two or more processes each hold some resources and wait for resources held by the other(s).
In a Time-sharing system, the CPU executes multiple jobs by switching among them, also known as multitasking. This process happens so fast that users can interact with each program while it is running.
– Throughput – number of processes that complete their execution per time unit.
– Turnaround time – amount of time to execute a particular process.
– Waiting time – amount of time a process has been waiting in the ready queue.
– Response time – amount of time from when a request was submitted until the first response is produced, not until output is complete (for time-sharing environments).
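These metrics can be made concrete with a small sketch (illustrative, not from the article). It assumes FCFS scheduling with every process arriving at time 0, so response time equals waiting time; the function name `fcfs_metrics` is our own:

```python
def fcfs_metrics(bursts):
    """Per-process metrics for FCFS scheduling, assuming all
    processes arrive at time 0 and run in list order."""
    waiting, turnaround, t = [], [], 0
    for b in bursts:
        waiting.append(t)        # time spent in the ready queue before running
        t += b
        turnaround.append(t)     # completion time minus arrival time (0)
    throughput = len(bursts) / t  # processes completed per time unit
    return waiting, turnaround, throughput

print(fcfs_metrics([5, 3, 2]))  # ([0, 5, 8], [5, 8, 10], 0.3)
```

Under these assumptions turnaround time = waiting time + burst time, which the output above shows process by process.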
The operating system controls and coordinates the use of hardware among the different processes and applications, and it provides various functionalities to users. The following are the main jobs of an operating system:
– Resource utilization
– Resource allocation
– Process management
– Memory management
– File management
– I/O management
– Device management
Real-time systems are used when rigid time requirements have been placed on the operation of a processor. They have well-defined, fixed time constraints.
It is a useful, memory-saving technique for multiprogrammed timesharing systems. A Reentrant Procedure is one in which multiple users can share a single copy of a program during the same period. Reentrancy has 2 key aspects: The program code cannot modify itself, and the local data for each user process must be stored separately. Thus, the permanent part is the code, and the temporary part is the pointer back to the calling program and local variables used by that program. Each execution instance is called activation. It executes the code in the permanent part, but has its own copy of local variables/parameters. The temporary part associated with each activation is the activation record. Generally, the activation record is kept on the stack.
Note: A reentrant procedure can be interrupted and called by an interrupting program, and still execute correctly on returning to the procedure.
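A minimal Python sketch (our own illustration, not from the article) contrasts a non-reentrant function, whose running total lives in shared module-level state, with a reentrant one whose state is confined to its activation record:

```python
# Non-reentrant: the running total lives in shared module-level state,
# so interleaved or interrupted calls can affect each other.
_total = 0

def add_unsafe(x):
    global _total
    _total += x
    return _total

# Reentrant: all state is passed in and returned; the code never
# modifies itself or shared data, so any number of activations
# (each with its own locals on the stack) coexist safely.
def add_safe(total, x):
    return total + x

print(add_unsafe(2), add_unsafe(2))  # 2 4  (second call sees the first)
print(add_safe(2, 2), add_safe(2, 2))  # 4 4  (calls are independent)
```

Both function names are hypothetical; the point is only that `add_safe` keeps its temporary part in locals, as the definition above requires.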
A multiprogramming operating system is one that allows multiple programs to execute concurrently on one CPU. If your interviewer asks OS questions on multiprogramming, you can highlight the key differences between a multiprogramming OS and other systems.
One way to show that you understand the benefits of multiprogramming is to use a real-life example. By offering an instance when you used multiprogramming to obtain such benefits, you display hands-on knowledge of a system that might be important to the interviewer.
Virtual memory is a memory-management method that allows a process to execute using both primary and secondary memory. Though the program executes from main memory, its pages are loaded from secondary memory as needed.
Thrashing is a situation in which the performance of a computer degrades or collapses. Thrashing occurs when a system spends more time processing page faults than executing transactions. While processing page faults is necessary to realize the benefits of virtual memory, thrashing has a negative effect on the system. As the page-fault rate increases, more transactions need servicing from the paging device, so the queue at the paging device grows, resulting in increased service time for each page fault.
Micro kernel: a microkernel runs only the minimal set of services the operating system needs in kernel space; in a microkernel operating system, all other operations are performed in user space.
Macro kernel: a macro kernel is a combination of the micro and monolithic kernel approaches.
RAID 0 – Non-redundant striping
RAID 1 – Mirrored Disks
RAID 2 – Memory-style error-correcting codes
RAID 3 – Bit-interleaved Parity
RAID 4 – Block-interleaved Parity
RAID 5 – Block-interleaved distributed Parity
RAID 6 – P+Q Redundancy
Server systems can be classified as either compute-server systems or file-server systems. In the first case, an interface is made available for clients to send requests to perform an action. In the second case, provisions are available for clients to create, access, and update files.
The word “boot” is short for “bootstrap,” which is the name of the program that loads the operating system at startup. Booting occurs when you start a computer, loading the kernel into memory. It also occurs when the computer malfunctions and you have to restart it or bring it up in safe mode.
Booting an operating system is an essential function that applies to many varied work environments. If you have a workplace with computers, it is highly likely you will have to boot new and existing computers as an IT professional. The answer to this question offers the interviewer a read on your fundamental skills with regard to operating systems.
Demand paging is a concept used by virtual memory. Only part of a process needs to be present in main memory for it to execute, which means only a few pages are present in main memory at any time while the rest are kept in secondary memory.
Banker’s algorithm is used to avoid deadlock; it is one of the deadlock-avoidance methods. It is named after the banking system, where a bank never allocates its available cash in such a way that it can no longer satisfy the requirements of all of its customers.
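The safety check at the heart of the Banker's algorithm can be sketched as follows (our own illustration; the function name and the example matrices are assumptions, using a common textbook instance):

```python
def is_safe(available, max_need, alloc):
    """Banker's safety check: return a safe sequence of process
    indices, or None if the state is unsafe."""
    n = len(alloc)
    work = list(available)
    # need[i][j] = max demand of process i for resource j, minus what it holds
    need = [[m - a for m, a in zip(max_need[i], alloc[i])] for i in range(n)]
    finished = [False] * n
    order = []
    while len(order) < n:
        progressed = False
        for i in range(n):
            if not finished[i] and all(need[i][j] <= work[j] for j in range(len(work))):
                # Pretend process i runs to completion and releases its allocation.
                work = [w + a for w, a in zip(work, alloc[i])]
                finished[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return None  # no process can finish: unsafe state
    return order

AVAILABLE = [3, 3, 2]
ALLOC = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
MAX_NEED = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
print(is_safe(AVAILABLE, MAX_NEED, ALLOC))  # [1, 3, 4, 0, 2]
```

The bank grants a request only if the resulting state would still pass this check, which is exactly the "never allocate cash you cannot cover" idea.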
With an increased number of processors, there is a considerable increase in throughput. Multiprocessor systems can also save money because the processors share resources. Finally, overall reliability is increased as well.
A kernel is the core of every operating system. It connects applications to the actual processing of data. It also manages all communications between software and hardware components to ensure usability and reliability.
An executing program is known as a process. There are two types of processes: operating system (system) processes and user processes.
Program | Process |
---|---|
Program is a set of instructions written to complete a task. | Process is a program in execution. |
Program is a passive/static entity. | Process is an active/dynamic entity. |
Program resides in secondary memory. | A process in execution resides in Primary Memory. |
Program has a longer life span. | A process has a limited life span. |
A program only requires memory space to store itself. | Process needs execution time in CPU, I/O requirements, shared resources, files, memory addresses and more. |
It has no significant overhead. | Has a significant overhead. |
Pre-Emptive Scheduling | Non Pre-Emptive Scheduling |
---|---|
CPU allocation is for a limited time. | CPU allocation until the process is complete. |
Execution of the process is interrupted in the middle. | Execution of the process remains uninterrupted until it is completed. |
The concept bears an overhead of switching between the tasks. | No such overhead of switching between the tasks. |
If the CPU receives continuous high priority tasks, a process may remain in the waiting state indefinitely. | If the CPU is processing a program with the largest burst time, even a program with the smallest burst time may have to starve. |
It allows flexibility to the processes which are in the waiting state allowing the high priority tasks to be executed first. | This approach is also known as the rigid scheduling as it offers no flexibility to the processes irrespective of their urgency for execution. |
Pre-emptive scheduling needs to maintain the integrity of shared data and ensure no data loss occurs when processes are swapped from the waiting state to the ready state. | Non-pre-emptive scheduling does not need to maintain data integrity in this way, as no processes are swapped from the waiting state to the ready state. |
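The practical effect of the two approaches on waiting time can be sketched with a small simulation (our own illustration; function names are assumptions, all processes arrive at time 0):

```python
from collections import deque

def fcfs_waiting(bursts):
    """Non-preemptive FCFS: each process waits for every earlier burst."""
    wait, elapsed = [], 0
    for b in bursts:
        wait.append(elapsed)
        elapsed += b
    return wait

def rr_waiting(bursts, quantum):
    """Preemptive round robin: a process runs at most `quantum` units,
    then is moved to the back of the ready queue if unfinished."""
    remaining = list(bursts)
    finish = [0] * len(bursts)
    ready = deque(range(len(bursts)))
    t = 0
    while ready:
        i = ready.popleft()
        run = min(quantum, remaining[i])
        t += run
        remaining[i] -= run
        if remaining[i]:
            ready.append(i)   # preempted: requeue (the switching overhead)
        else:
            finish[i] = t
    # waiting time = turnaround - burst, with all arrivals at time 0
    return [finish[i] - bursts[i] for i in range(len(bursts))]

print(fcfs_waiting([24, 3, 3]))   # [0, 24, 27] -> average 17
print(rr_waiting([24, 3, 3], 4))  # [6, 4, 7]   -> average ~5.67
```

With one long burst at the head of the queue, the non-preemptive schedule makes the short jobs wait behind it, while preemption lets them finish early at the cost of extra switches.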
A deadlock-avoidance algorithm dynamically examines the resource-allocation state to ensure that a circular-wait condition can never exist. The resource-allocation state is defined by the number of available and allocated resources and the maximum demands of the processes. There are two such algorithms: the resource-allocation-graph algorithm and the Banker's algorithm.
The CPU scheduler selects from among the processes in memory that are ready to execute and allocates the CPU to one of them. CPU scheduling decisions may take place when a process:
– switches from the running state to the waiting state,
– switches from the running state to the ready state,
– switches from the waiting state to the ready state, or
– terminates.
There are 4 necessary conditions for a deadlock: mutual exclusion, hold and wait, no preemption, and circular wait.
There are two types of fragmentation: internal fragmentation (unused space inside an allocated fixed-size block) and external fragmentation (free memory scattered in pieces too small to satisfy a request).
The Memory Management Unit (MMU) is a hardware device that maps virtual addresses to physical addresses. In the MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory. The user program deals with logical addresses; it never sees the real physical addresses.
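The relocation-register scheme can be sketched in a few lines (our own illustration; the function name and the base/limit values are assumptions):

```python
def mmu_translate(logical, relocation, limit):
    """Sketch of dynamic relocation: the MMU checks the logical address
    against the limit register, then adds the relocation (base) register."""
    if logical >= limit:
        raise MemoryError("addressing error: logical address beyond limit")
    return logical + relocation

# A process with base 14000 and limit 12000: logical 346 -> physical 14346.
print(mmu_translate(346, 14000, 12000))  # 14346
```

The limit check is what gives each process a protected address space: any logical address at or beyond the limit traps instead of reaching memory.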
This is an advanced OS interview question often asked in interviews. The different types of scheduling algorithms are: First-Come, First-Served (FCFS), Shortest Job First (SJF), Shortest Remaining Time First (SRTF), Priority Scheduling, Round Robin (RR), and Multilevel Queue Scheduling.
User-Level Threads | Kernel-Level Threads |
---|---|
User-level threads are implemented by users. | Kernel-level threads are implemented by the OS. |
The OS does not recognize user-level threads. | Kernel threads are recognized by the OS. |
Implementation is easy. | Implementation is complicated. |
Context switch time is less. | Context switch time is more. |
Context switching needs no hardware support. | Hardware support is needed. |
If one user-level thread performs a blocking operation, the entire process is blocked. | If one kernel thread performs a blocking operation, another thread can continue execution. |
When several threads (or processes) running in parallel on different cores share data, changes made by one process may override changes made by another running in parallel, resulting in inconsistent data. Such processes therefore need to be synchronized; managing system resources and processes to avoid this situation is known as process synchronization.
The different synchronization mechanisms are mutex locks, semaphores, monitors, and condition variables.
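A minimal sketch of mutual exclusion with a lock (our own illustration using Python's standard `threading` module; the counter and thread counts are arbitrary):

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:          # mutual exclusion around the read-modify-write
            counter += 1

threads = [threading.Thread(target=add, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: the lock keeps the increments consistent
```

Without the lock, the `counter += 1` read-modify-write can interleave between threads and lose updates, which is exactly the inconsistency process synchronization exists to prevent.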
Paging is a memory-management scheme that permits the physical address space of a process to be non-contiguous; in other words, it eliminates the need for contiguous allocation of physical memory.
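Address translation under paging can be sketched as follows (our own illustration; the page size, function name, and page-table contents are assumptions):

```python
PAGE_SIZE = 4096  # assumed page size for this sketch

def page_translate(logical, page_table):
    """Split a logical address into (page number, offset) and map the
    page through the page table to a frame; frames need not be adjacent."""
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]   # a real system would page-fault on a miss
    return frame * PAGE_SIZE + offset

# Pages 0 and 1 live in frames 5 and 2: physically non-contiguous.
table = {0: 5, 1: 2}
print(page_translate(4100, table))  # 8196 (frame 2 * 4096 + offset 4)
```

The example shows the point of the definition: consecutive pages 0 and 1 land in frames 5 and 2, so the process's physical memory is not contiguous even though its logical address space is.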
A zombie process is a process that has completed execution and is in the terminated state but still has an entry in the process table. The entry is retained so the parent can collect the child's exit status; until the parent reaps it, that process-table entry is not freed.
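A zombie can be observed directly on Linux (this sketch is our own, is Linux-only because it reads `/proc`, and uses only standard-library calls):

```python
import os
import time

def make_zombie():
    """Fork a child that exits immediately; until the parent reaps it
    with waitpid(), the child's process-table entry lingers as a zombie."""
    pid = os.fork()
    if pid == 0:
        os._exit(0)            # child: terminate right away
    time.sleep(0.2)            # parent: give the child time to exit
    with open(f"/proc/{pid}/stat") as f:
        state = f.read().split()[2]   # field 3 of /proc/<pid>/stat is the state
    os.waitpid(pid, 0)         # reap: the zombie entry disappears
    return state

print(make_zombie())  # 'Z' on Linux: zombie until reaped
```

The child has finished before the parent reads `/proc`, yet its state shows `Z`; only the `waitpid` call releases the table entry.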
In a virtual memory system, all processes are divided into fixed-size pages, which are loaded into physical memory using demand paging. Under demand paging, whenever a running process needs a page that is absent, a page fault occurs and the required page is loaded into memory, replacing some other page; the page-replacement algorithm specifies which page is replaced. Belady's Anomaly is said to occur when increasing the number of page frames results in more page faults rather than fewer, which can happen with the FIFO replacement algorithm.
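The anomaly can be demonstrated with a short FIFO simulation (our own sketch; the reference string is the classic textbook example, and the function name is an assumption):

```python
from collections import deque

def fifo_faults(refs, frames):
    """Count page faults under FIFO replacement with `frames` frames."""
    resident, order, faults = set(), deque(), 0
    for page in refs:
        if page not in resident:
            faults += 1
            if len(resident) == frames:
                resident.discard(order.popleft())  # evict the oldest page
            resident.add(page)
            order.append(page)
    return faults

REFS = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(REFS, 3), fifo_faults(REFS, 4))  # 9 10: more frames, more faults
```

With 3 frames this reference string causes 9 faults, but with 4 frames it causes 10, which is exactly Belady's Anomaly.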
Paging | Segmentation |
---|---|
A page is a physical unit of information. | A segment is a logical unit of information. |
Frames in main memory are required. | No frames are required. |
A page is of fixed block size. | A segment is of variable block size. |
It leads to internal fragmentation. | It leads to external fragmentation. |
Page size is decided by the hardware. | Segment size is decided by the user. |
It does not allow logical partitioning and protection of application components. | It allows logical partitioning and protection of application components. |
Paging involves a page table that contains the base address of each page. | Segmentation involves a segment table that contains the segment number and offset. |
Kernels are basically of two types: monolithic kernels and microkernels.
If processes with higher burst times sit at the front of the ready queue, processes with lower burst times may get blocked, meaning they may never get the CPU if the job in execution has a very high burst time. This is called the convoy effect, and the indefinite blocking it causes is starvation.
A state is safe if the system can allocate all resources requested by all processes (up to their stated maximums) without entering a deadlock state. A system is in a safe state if there exists a safe sequence of all processes. Deadlock avoidance means ensuring that the system never enters an unsafe state.
The basic difference between turnaround time and response time: turnaround time is the total time from submission of a process to its completion, while response time is the time from submission until the first response is produced.
Given below are the scheduling queues: the job queue (all processes in the system), the ready queue (processes residing in main memory, ready and waiting to execute), and the device queues (processes waiting for a particular I/O device).