Operating Systems

Refined Comprehensive Study Notes

# Introduction

These notes combine and refine the key ideas repeated across the six slide sets. The material follows a logical progression:

  1. How the computer system works at the hardware level
  2. Where the operating system fits into that system
  3. How the operating system manages resources
  4. How processes are created, scheduled, protected, and terminated
  5. How OS design choices affect performance, safety, and modularity

1. What Is an Operating System?

An operating system (OS) is system software that manages computer hardware and software resources and provides services for application programs. It acts in three roles:

  • As an intermediary, it lets users and applications interact with hardware through a controlled interface.
  • As a resource allocator, it decides how CPU time, memory, storage, and I/O devices are distributed.
  • As a control program, it supervises execution and prevents improper or unsafe use of system resources.

Main objectives of an OS:

  • Convenience: make the computer easier to use
  • Efficiency: use system resources effectively
  • Ability to evolve: allow new functions to be added without disrupting service

2. Basic Computer System Structure

A computer system is commonly viewed as four layers: hardware at the bottom, the operating system above it, then application programs, and finally the users.

At a lower level, the main hardware elements are the processor (CPU), main memory, I/O modules, and the system bus that interconnects them.

The CPU executes instructions and uses registers such as:

  • Program counter (PC): address of the next instruction
  • Instruction register (IR): the instruction currently being executed
  • Memory address register (MAR) and memory buffer register (MBR): used for memory access
  • Program status word (PSW): condition codes and status information

Instruction Cycle

The processor repeatedly performs:

  • Fetch: retrieve instruction from memory
  • Execute: perform the instruction

This cycle continues until an interrupt or exception changes the flow.

3. Interrupts and I/O

An interrupt is a signal that causes the CPU to suspend its current work and handle an event.

Major interrupt types:

  • Program (trap/exception): caused by the executing instruction, e.g. divide by zero
  • Timer: lets the OS regain control periodically
  • I/O: a device signals completion or an error
  • Hardware failure: e.g. power failure or a memory parity error

Why interrupts matter:

I/O devices are slower than the CPU. Instead of forcing the processor to constantly check devices, the system lets devices notify the CPU when something important happens.

Interrupt handling flow:

  1. Current process state is saved
  2. Control passes to the interrupt handler
  3. The event is processed
  4. The scheduler or OS decides what runs next

I/O Approaches:

Programmed I/O

  • CPU repeatedly checks device status
  • no interrupts used
  • inefficient

Interrupt-driven I/O

  • device interrupts CPU when ready
  • better than programmed I/O
  • may still involve the CPU in many small transfers

DMA (Direct Memory Access)

  • large data blocks transfer directly
  • CPU interrupted only on completion
  • much more efficient

4. Memory Hierarchy and Cache

Memory is organized as a hierarchy:

  1. Registers (Fastest, Smallest, Most Expensive)
  2. Cache
  3. Main memory
  4. Secondary storage (Slowest, Largest, Cheapest)

Main memory: volatile, directly used by the CPU.

Secondary storage: nonvolatile, used for files and long-term storage.

Cache memory:

  • stores frequently used data closer to the CPU
  • reduces the gap between processor speed and main memory speed
  • improves performance by exploiting locality of reference

5. Booting, BIOS, and UEFI

Booting is the process of starting the computer and loading the operating system.

Typical boot sequence:

  1. Power is supplied
  2. Firmware starts running
  3. Hardware is initialized and checked
  4. Bootloader is located
  5. Kernel is loaded into memory
  6. Operating system startup continues

BIOS

  • older firmware model
  • traditional startup system

UEFI

  • newer firmware model
  • faster boot & better security
  • supports larger disks
  • supports Secure Boot and modern boot mechanisms

6. Kernel, User Mode, and Kernel Mode

The kernel is the central core of the operating system. It is responsible for the most critical system functions and remains active while the system runs.

Typical kernel responsibilities:

  • process and thread management
  • memory management
  • device drivers and I/O control
  • file-system support
  • handling system calls and interrupts

Execution Modes

User mode:

  • used by normal applications
  • restricted privileges
  • cannot execute privileged instructions

Kernel mode:

  • used by the operating system
  • full hardware access
  • can execute privileged instructions

Hardware support that makes this protection possible:

  • a mode bit that distinguishes user mode from kernel mode
  • privileged instructions that trap if attempted in user mode
  • memory protection (e.g. base/limit registers or the MMU)
  • a timer interrupt so the OS always regains control

7. OS Services

The OS provides services both for users/programs and for efficient system operation.

User/program-oriented:

user interface, program execution, I/O operations, file-system manipulation, communication, error detection

System-oriented:

resource allocation, accounting, protection and security

Additional support:

editors, debuggers, loaders, status/monitoring tools

8. System Calls and APIs

A system call is the programming interface through which a user program requests a service from the kernel.

Major system-call categories:

  • process control, file & device management
  • information maintenance, communication, protection

Programs usually use APIs instead of raw system calls, for example:

  • Win32 API
  • POSIX API
  • Java API

Parameter-passing techniques: registers, memory block/table, stack.

9. Multiprogramming vs Multiprocessing vs Time-Sharing

These concepts are easy to confuse, so they must be kept separate.

Multiprogramming:

  • one CPU
  • multiple processes/programs kept in memory
  • when one waits for I/O, CPU runs another
  • main goal: efficiency and CPU utilization

Multiprocessing:

  • multiple physical CPUs
  • true simultaneous execution
  • often discussed as SMP (Symmetric Multiprocessing): all processors share main memory and I/O facilities; enables real parallelism.

Time-sharing:

  • interactive extension of multiprogramming
  • processor time is shared among multiple users or tasks
  • main goal: responsiveness for users

10. Processes

A process is a program in execution.

A process includes:

  • current program counter, registers, variables
  • execution context, state
  • associated resources (files, memory)

Important distinction:

Program = passive entity

Process = active entity

Two running copies of the same program are two separate processes because they have different execution contexts.

11. Context Switch

A context switch happens when the CPU stops running one process and starts running another.

  1. current process state is saved
  2. next process state is loaded
  3. execution resumes from new process

Saved/restored items (stored in PCB):

registers, program counter, stack pointer, process state, scheduling info.

Important note:

A context switch is overhead. During the switch itself, the CPU is not doing useful work for user programs.

12. Process States

Running: using the CPU
Ready: waiting for CPU time
Blocked: waiting for external event

Typical transitions:

  • Running -> Blocked: (waits for input/event)
  • Running -> Ready: (scheduler preempts it)
  • Ready -> Running: (scheduler chooses it)
  • Blocked -> Ready: (awaited event occurs)

13. Process Table and PCB

The OS stores process information in a process table. Each entry is a PCB (Process Control Block), which typically contains:

  • process state
  • program counter
  • stack pointer
  • CPU registers
  • memory allocation
  • open-file info
  • accounting info
  • scheduling info

The PCB allows the OS to pause and later resume a process correctly.

14-16. Process Creation, Zombies, and Parallelism Traps

Process Creation in UNIX

  • fork() creates a child process as a copy of the parent.
  • exec() replaces the child’s memory image with a new program.

PID behavior after fork():

  • pid < 0 → fork failed; no child was created
  • pid == 0 → currently in the child
  • pid > 0 → currently in the parent (the value is the child's PID)

waitpid, Zombie & Orphan

  • waitpid(): allows the parent to wait for a child to finish (sync).
  • Zombie process: child terminated, parent hasn't collected exit status. Process entry remains in table.
  • Orphan process: parent terminates first, child continues to run.

Faulty Parallelism Trap

After fork(), both parent and child continue execution from the same point. Always check PID!

Unsafe pattern:

fork();
execl(...); // runs in BOTH parent and child: the parent's image is replaced too!

Correct pattern:

pid_t pid = fork();
if (pid == 0) {
    execl(...);  // child: replace its memory image
    _exit(127);  // reached only if execl fails
} else {
    // parent code
}

17. Memory Management, Paging, and Address Translation

Major goals:

  • process isolation
  • automatic allocation
  • modular programming
  • protection/access control
  • long-term storage support

Virtual memory:

Gives programs the illusion of a larger logical memory space. Allows programs to run even when not fully in RAM.

Paging:

Divides logical memory into fixed-size pages and physical memory into frames of the same size.
Virtual address = (page number, offset within the page)

Addresses:

Logical: seen by process.
Physical: real RAM address.

MMU:

Hardware unit responsible for translating logical to physical addresses.

18. File, Storage & Device Management

File-system management:

create/delete/open/close/read/write files & directories, manage structures, backup support.

Storage management:

free-space management, storage allocation, disk scheduling.

Device management:

coordination via drivers/controllers, allocation, status tracking, synchronized hardware use.

19. Protection and Security

Protection:

Who may access which resource?

Security:

How is the system defended against threats?

Mechanisms:

  • access control
  • permissions
  • privileged instructions

Core Concepts:

  • Availability
  • Confidentiality
  • Data integrity
  • Authenticity

20-22. OS Architecture and Modules

Monolithic Kernel

  • File system, scheduler, drivers, and memory management all inside kernel space.
  • Usually fast (lower communication overhead).
  • Risk: a kernel bug can be catastrophic.

Microkernel

  • Kernel contains only minimal essential mechanisms (IPC, basic scheduling, address-space mgmt).
  • Other services run in user space as servers.
  • More modular, portable, and secure.
  • Trade-off: communication overhead makes it slower.

Modules & Hybrid Systems

Loadable kernel modules separate core components and load when needed. Modern systems are hybrid, balancing monolithic performance with microkernel security/modularity.

System Programs & Daemons

Users interact with system programs (compilers, file tools). Background services (daemons) start at boot to handle logging, printing, etc.

Final Summary & Clean Concept Map

  • Hardware foundation: CPU, registers, memory, I/O, interrupts, DMA, cache
  • OS role: intermediary, control program, resource allocator, kernel, services
  • Execution model: multiprogramming, time-sharing, multiprocessing
  • Process management: process model, states, PCB, context switch, scheduling, fork/exec, waitpid
  • Memory & protection: virtual memory, paging, logical vs physical, MMU, user vs kernel mode
  • OS structure: monolithic, microkernel, modules, hybrid systems

These slides together explain that a computer system is built from processor, memory, I/O, and storage hardware, and that the operating system is the protected software layer that manages all of these resources. The OS provides services, controls execution, handles interrupts, supports memory management, organizes files and devices, and protects the system. Processes are the main execution unit, and the OS must create, schedule, suspend, resume, and terminate them correctly. Modern operating-system design also depends on architecture choices such as monolithic kernels, microkernels, modularity, and multiprocessing.

⚡ Quick Memory Hooks

  • Multiprogramming: one CPU, smart sharing
  • Multiprocessing (SMP): multiple CPUs, true parallelism
  • Context switch: save/load process state via the PCB
  • Paging: divide logical memory into pages
  • MMU: translates logical to physical addresses
  • fork/exec: UNIX process creation
  • waitpid: parent waits for a child
  • Zombie: child ended, parent didn't collect its status
  • Orphan: parent ended first
  • Monolithic vs Micro: fast/coupled vs modular/slower