These notes cover core operating-system concepts: inter-process communication (shared memory and message passing), the CPU scheduler, scheduling criteria, and scheduling algorithms.
Shared Memory System
A shared memory system is an Inter-Process Communication (IPC) mechanism that allows multiple processes to access a common area of memory. This enables processes to share data directly by reading from and writing to the same memory region, which provides faster communication compared to other IPC methods like pipes or message queues.
Key Points of Shared Memory System:
- Common Memory Region: A specific memory segment is created that multiple processes can access.
- Fast Communication: Since data is shared via memory, there’s no need to pass messages or data through intermediaries.
- Synchronization Needed: Processes must synchronize access to avoid conflicts, using mechanisms like semaphores or mutexes.
- Creation and Access:
- In UNIX-like systems, shmget() creates a shared memory segment, and shmat() attaches it to the process’s address space.
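As a minimal sketch of the same idea in Python, the standard multiprocessing.shared_memory module wraps a named shared segment: creating it plays the role of shmget() + shmat(), close() mirrors shmdt(), and unlink() mirrors removing the segment with shmctl(). The function names (writer, demo) are illustrative, not part of any API:

```python
from multiprocessing import Process, shared_memory

def writer(name):
    # Attach to the existing segment by name and write into it
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:5] = b"hello"
    shm.close()

def demo():
    # Create a shared segment (analogous to shmget + shmat in C)
    shm = shared_memory.SharedMemory(create=True, size=32)
    p = Process(target=writer, args=(shm.name,))
    p.start()
    p.join()                      # child has finished writing
    data = bytes(shm.buf[:5])     # parent reads the shared bytes
    shm.close()
    shm.unlink()                  # remove the segment (like IPC_RMID)
    return data

if __name__ == "__main__":
    print(demo())  # b'hello'
```

Note that this toy example sidesteps synchronization by joining the child before reading; with concurrent access, a semaphore or mutex would be needed, as the notes above point out.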
Message Passing System
A message passing system is an Inter-Process Communication (IPC) mechanism where processes communicate by sending and receiving messages. Unlike shared memory, where data is accessed directly, message passing involves explicit messages for communication between processes.
Key Points of Message Passing:
- Explicit Communication: Processes exchange messages, either synchronously (blocking) or asynchronously (non-blocking).
- No Shared Memory: Each process has its own memory space; data is passed through messages instead of a common memory area.
- Message Types: Messages may be fixed-length or variable-length.
- Operations:
- Send: A process sends a message to another process.
- Receive: A process receives a message from another process.
- Mechanisms:
- Message Queues: Messages are stored in queues until the receiving process retrieves them.
- Sockets: Used for communication over a network.
- Pipes: Used for one-way communication, typically between related processes such as a parent and child.
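The send/receive operations above can be sketched with Python's multiprocessing.Queue, a message queue between processes where put() is the send and get() is a blocking receive (the function names producer and demo are my own):

```python
from multiprocessing import Process, Queue

def producer(q):
    # Send: place messages on the queue for another process
    q.put("ping")
    q.put("done")

def demo():
    q = Queue()                 # kernel-backed queue between processes
    p = Process(target=producer, args=(q,))
    p.start()
    msgs = [q.get(), q.get()]   # Receive: blocks until a message arrives
    p.join()
    return msgs

if __name__ == "__main__":
    print(demo())  # ['ping', 'done']
```

Because each process keeps its own address space, the data travels inside the messages themselves, unlike the shared-memory example where both processes touch the same bytes.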
CPU Scheduler
A CPU scheduler is a component of the operating system that manages the execution of processes on the CPU. It determines which process gets to run at any given time, based on a set of scheduling algorithms.
Key Points of CPU Scheduler:
- Process Selection: Chooses which process from the ready queue will execute next.
- Context Switching: When switching between processes, the CPU scheduler handles saving and loading process states.
- Scheduling Algorithms:
- First-Come, First-Served (FCFS): Processes are scheduled in the order they arrive.
- Shortest Job Next (SJN): The process with the shortest CPU burst time is chosen.
- Round-Robin (RR): Each process is given a fixed time slice in turn.
- Priority Scheduling: Processes are scheduled based on priority.
- Multilevel Queue: Different queues for different types of processes, each with its own scheduling algorithm.
- Goals:
- Maximize CPU utilization.
- Minimize waiting and response times.
- Ensure fairness among processes.
Scheduling Criteria
Scheduling criteria are the factors used to evaluate the performance of a CPU scheduling algorithm. These criteria help determine how effectively the scheduler allocates CPU time to processes.
Key Scheduling Criteria:
- CPU Utilization:
- Maximizes the percentage of time the CPU is actively working, aiming for as close to 100% as possible.
- Throughput:
- The number of processes completed per unit of time. Higher throughput is better.
- Turnaround Time:
- The total time taken from process submission to completion. Lower turnaround times are preferred.
- Waiting Time:
- The amount of time a process spends in the ready queue waiting for CPU access. Minimizing waiting time improves efficiency.
- Response Time:
- The time from when a request is submitted until the first response is produced. Important for interactive systems.
- Fairness:
- Ensures that each process gets a fair share of CPU time without being starved or ignored for long periods.
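Turnaround and waiting time are easy to make concrete with a small sketch. Assuming all processes arrive at time 0 and run under FCFS (the function name fcfs_metrics is illustrative), each process waits for the bursts ahead of it, and its turnaround time equals its completion time:

```python
def fcfs_metrics(bursts):
    """Per-process waiting and turnaround time under FCFS,
    assuming all processes arrive at time 0 in the given order."""
    clock = 0
    metrics = []
    for burst in bursts:
        waiting = clock               # time spent in the ready queue
        clock += burst
        turnaround = clock            # completion time - arrival time (0)
        metrics.append({"waiting": waiting, "turnaround": turnaround})
    return metrics

m = fcfs_metrics([24, 3, 3])
avg_wait = sum(p["waiting"] for p in m) / len(m)
print(avg_wait)  # 17.0, i.e. (0 + 24 + 27) / 3
```

The high average here comes from the long first burst making the short jobs wait, which is exactly the convoy effect mentioned under FCFS below.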
Scheduling Algorithms
Scheduling algorithms determine how the CPU allocates its time among processes in the ready queue. Different algorithms are used based on the system's needs for performance, fairness, or responsiveness.
Key Scheduling Algorithms:
- First-Come, First-Served (FCFS):
- Processes are executed in the order they arrive.
- Simple but can cause long wait times (e.g., the "convoy effect").
- Shortest Job Next (SJN):
- The process with the shortest CPU burst is selected next.
- Minimizes waiting time but can lead to process starvation.
- Round-Robin (RR):
- Each process is given a fixed time slice (quantum) and cycles through the ready queue.
- Ideal for time-sharing systems, ensuring responsiveness.
- Priority Scheduling:
- Each process is assigned a priority, and the process with the highest priority is scheduled next.
- Can cause low-priority processes to starve unless measures like aging are implemented.
- Multilevel Queue Scheduling:
- Processes are divided into multiple queues based on priority or process type, each with its own scheduling algorithm.
- Suitable for systems with different classes of processes (e.g., interactive vs. batch).
- Multilevel Feedback Queue:
- Allows processes to move between queues based on their behavior (e.g., CPU-bound or I/O-bound).
- Dynamically adjusts priorities to improve fairness and responsiveness.
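Round-Robin is simple enough to simulate directly. The sketch below (my own naming; all processes assumed to arrive at time 0) gives each process at most one quantum, then moves it to the back of the ready queue if it still has work left:

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate Round-Robin scheduling; all processes arrive at
    time 0. Returns each process's completion time."""
    ready = deque(enumerate(bursts))   # (pid, remaining burst time)
    clock = 0
    completion = {}
    while ready:
        pid, remaining = ready.popleft()
        run = min(quantum, remaining)  # run for one quantum at most
        clock += run
        if remaining > run:
            ready.append((pid, remaining - run))  # back of the queue
        else:
            completion[pid] = clock
    return [completion[pid] for pid in range(len(bursts))]

print(round_robin([24, 3, 3], quantum=4))  # [30, 7, 10]
```

Note how the short jobs finish early (times 7 and 10) instead of waiting behind the long one, which is why RR suits time-sharing systems.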
Preemptive and Non-Preemptive Scheduling
Preemptive and non-preemptive scheduling are the two broad approaches to CPU scheduling:
Preemptive Scheduling:
- The CPU can be taken away from a running process if a higher-priority process arrives or if the current process exceeds its time slice.
- Allows for better responsiveness and fairness.
- Examples: Round-Robin, Priority Scheduling (Preemptive), Shortest Remaining Time First (SRTF).
Non-Preemptive Scheduling:
- Once a process starts execution, it runs until completion or voluntarily gives up the CPU (e.g., through I/O operations).
- Simpler but can lead to long waiting times for other processes.
- Examples: First-Come, First-Served (FCFS), Shortest Job Next (SJN), Priority Scheduling (Non-Preemptive).
Key Difference:
- Preemptive: Allows interruption of processes.
- Non-Preemptive: Processes run to completion without interruption.
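The difference is visible in a one-tick-at-a-time sketch of Shortest Remaining Time First (the preemptive form of SJN; the function name srtf is my own). Because the scheduler re-picks the shortest remaining job every time unit, a newly arrived shorter job takes the CPU away from the running one:

```python
def srtf(procs):
    """Preemptive SRTF simulation.
    procs: list of (arrival, burst). Returns completion times."""
    remaining = [burst for _, burst in procs]
    completion = [None] * len(procs)
    clock = 0
    done = 0
    while done < len(procs):
        # Candidates: arrived and unfinished processes
        candidates = [i for i, (arrival, _) in enumerate(procs)
                      if arrival <= clock and remaining[i] > 0]
        if not candidates:
            clock += 1             # CPU idles until the next arrival
            continue
        i = min(candidates, key=lambda j: remaining[j])
        remaining[i] -= 1          # run one time unit; a newly arrived
        clock += 1                 # shorter job can preempt at the next tick
        if remaining[i] == 0:
            completion[i] = clock
            done += 1
    return completion

print(srtf([(0, 8), (1, 4), (2, 9), (3, 5)]))  # [17, 5, 26, 10]
```

Under non-preemptive SJN the first process would have held the CPU for its full 8-unit burst; here it is preempted at time 1 when the 4-unit job arrives, so the short jobs finish much earlier.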