Memory Hierarchy, Cache, Virtual Memory, DMA & I/O Organization – Complete Guide For GATE & UGC NET



  Category:  COMPUTER SCIENCE | 4th November 2025, Tuesday


Memory Hierarchy, Cache, Virtual Memory, DMA & I/O Organization

Introduction

In computer architecture, memory organization plays a crucial role in determining system performance. Processor speeds have increased rapidly, but memory access times have not kept pace. To bridge this performance gap, a well-designed memory hierarchy is implemented that balances speed, cost, and size. Concepts like cache memory, virtual memory, Direct Memory Access (DMA), and I/O organization are key topics in computer system architecture and often form a major part of competitive exams such as GATE and UGC NET.

1. Memory Hierarchy

The memory hierarchy is a structured arrangement of memory storage devices based on their access time, capacity, and cost per bit. The idea is to provide the illusion of a large, fast, and inexpensive memory system to the processor.

The general structure of the memory hierarchy is as follows:

  1. Registers – fastest and smallest memory, located inside the CPU.

  2. Cache memory – high-speed memory located close to the CPU; stores frequently accessed data.

  3. Main memory (RAM) – larger but slower memory used for active programs.

  4. Secondary storage – non-volatile storage like hard disks or SSDs; slower but larger.

  5. Tertiary storage – backup and archival media like magnetic tapes and optical disks.

Characteristics of Memory Hierarchy

  • Speed decreases from top to bottom.

  • Cost per bit also decreases as we move down.

  • Capacity increases as we go down the hierarchy.

  • Access frequency is higher for upper levels (registers, cache) and lower for bottom levels.

Locality of Reference

Programs exhibit locality of reference, meaning that they access a relatively small portion of memory repeatedly. There are two types:

  • Temporal locality – recently accessed data is likely to be accessed again.

  • Spatial locality – data located near recently accessed data is likely to be accessed soon.

The memory hierarchy exploits these localities to improve overall system efficiency.

2. Cache Memory

Cache memory is a small, fast memory located between the CPU and main memory. It stores copies of frequently used data and instructions to reduce the average memory access time.

Cache Mapping Techniques

To manage which main-memory blocks are stored in the cache, mapping techniques are used:

  1. Direct Mapping

    • Each block of main memory maps to exactly one cache line.

    • Simple and fast, but prone to conflict misses.

    • Formula:
      Cache line = (main memory block number) MOD (number of cache lines)

  2. Associative Mapping

    • Any block can be placed in any cache line.

    • Flexible but expensive, since lookup requires searching every line.

    • Uses parallel comparison hardware.

  3. Set-Associative Mapping

    • A compromise between direct and associative mapping.

    • The cache is divided into sets, and each set has multiple lines.

    • Each block maps to a specific set but can go in any line within that set (e.g., 2-way or 4-way set associative).
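
The address arithmetic behind these mappings can be sketched in a few lines of Python. The parameters below (16-byte blocks, 256 cache lines) are illustrative assumptions, not values from the text:

```python
# Sketch: how a byte address splits into tag / index / offset.
# Assumed parameters (illustrative only): 16-byte blocks, 256 lines.
BLOCK_SIZE = 16   # bytes per block
NUM_LINES = 256   # total cache lines

def direct_map(address):
    """Direct mapping: cache line = block number MOD number of lines."""
    block_number = address // BLOCK_SIZE
    line = block_number % NUM_LINES
    tag = block_number // NUM_LINES
    offset = address % BLOCK_SIZE
    return tag, line, offset

def set_assoc_map(address, ways=4):
    """Set-associative: the block maps to one set, any line within it."""
    num_sets = NUM_LINES // ways
    block_number = address // BLOCK_SIZE
    set_index = block_number % num_sets
    tag = block_number // num_sets
    return tag, set_index

tag, line, offset = direct_map(0x1A2B3C)
set_tag, set_index = set_assoc_map(0x1A2B3C)
```

Note how the same cache, viewed 4-way set-associatively, has fewer index bits and a larger tag: the index only selects a set, not a single line.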

Cache Replacement Policies

When a new block needs to be loaded and the cache is full, one block must be replaced. The common replacement algorithms are:

  • Least Recently Used (LRU)

  • First-In First-Out (FIFO)

  • Random replacement
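
A minimal sketch of LRU replacement for a fully associative cache, tracking recency with Python's OrderedDict (the capacity and access sequence are illustrative):

```python
from collections import OrderedDict

def simulate_lru(accesses, capacity):
    """Count hits for a fully associative cache with LRU replacement."""
    cache = OrderedDict()   # keys = block numbers, least recent first
    hits = 0
    for block in accesses:
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # mark as most recently used
        else:
            if len(cache) == capacity:
                cache.popitem(last=False)  # evict least recently used
            cache[block] = True
    return hits

hits = simulate_lru([1, 2, 3, 1, 4, 1, 2], capacity=3)
```

Here the access to block 4 evicts block 2 (the least recently used at that point), so the final access to 2 misses again: 2 hits out of 7 accesses.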

Write Policies

When the CPU writes data, the cache and main memory must stay consistent.

  • Write-through: data is written to both the cache and main memory simultaneously.

  • Write-back: data is written only to the cache; main memory is updated later, when the block is replaced.
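
The practical difference between the two policies can be sketched with a dirty bit. This is an illustrative toy model (lookup and eviction policy are omitted), not a real cache implementation:

```python
# Toy model: write-through vs write-back, tracked with a dirty bit.
class Cache:
    def __init__(self, policy):
        self.policy = policy     # "write-through" or "write-back"
        self.lines = {}          # block -> (value, dirty bit)
        self.memory = {}
        self.memory_writes = 0

    def write(self, block, value):
        if self.policy == "write-through":
            self.lines[block] = (value, False)
            self.memory[block] = value   # memory updated on every write
            self.memory_writes += 1
        else:                            # write-back: only mark dirty
            self.lines[block] = (value, True)

    def evict(self, block):
        value, dirty = self.lines.pop(block)
        if dirty:                        # write back once, at eviction
            self.memory[block] = value
            self.memory_writes += 1

wb = Cache("write-back")
for v in (1, 2, 3):
    wb.write(0, v)
wb.evict(0)                              # one memory write in total

wt = Cache("write-through")
for v in (1, 2, 3):
    wt.write(0, v)                       # three memory writes
```

Write-back collapses the three writes to block 0 into a single memory update at eviction, which is why it reduces bus traffic for write-heavy workloads.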

Cache Performance

Cache performance is measured using the hit ratio:

Hit ratio = (number of cache hits) / (total memory accesses)

A high hit ratio means better performance, since most data requests are served from the cache.

3. Virtual Memory

Virtual memory is a memory management technique that provides the illusion of a large main memory by using a portion of secondary storage (usually a hard disk) as an extension of RAM. This allows systems to run large programs, or multiple processes, even when physical memory is limited.

Concept

Each program is given a logical (virtual) address space, which can be larger than the physical address space. The Memory Management Unit (MMU) translates virtual addresses to physical addresses during program execution.

Paging

Virtual memory is divided into fixed-size blocks called pages, and physical memory is divided into frames of the same size.

  • When a program needs a page that is not in main memory, a page fault occurs.

  • The operating system brings the required page from the disk into a free frame.

  • If memory is full, a page replacement algorithm is used.

Page Replacement Algorithms

  1. FIFO (First-In First-Out) – the oldest page in memory is replaced.

  2. LRU (Least Recently Used) – the page least recently accessed is replaced.

  3. Optimal (OPT) – the page that will not be used for the longest time in the future is replaced (used theoretically, as a benchmark for comparison).
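
These algorithms are usually tested with a reference string and a frame count. The sketch below counts page faults for FIFO and LRU on an arbitrary example string:

```python
from collections import deque, OrderedDict

def fifo_faults(refs, frames):
    """Count page faults with FIFO replacement."""
    in_memory, order, faults = set(), deque(), 0
    for page in refs:
        if page not in in_memory:
            faults += 1
            if len(in_memory) == frames:
                in_memory.remove(order.popleft())  # evict oldest page
            in_memory.add(page)
            order.append(page)
    return faults

def lru_faults(refs, frames):
    """Count page faults with LRU replacement."""
    in_memory, faults = OrderedDict(), 0
    for page in refs:
        if page in in_memory:
            in_memory.move_to_end(page)            # refresh recency
        else:
            faults += 1
            if len(in_memory) == frames:
                in_memory.popitem(last=False)      # evict least recent
            in_memory[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3]   # example reference string
```

With 3 frames, this string gives 9 faults under FIFO but only 8 under LRU, since LRU keeps the recently re-used page 0 resident where FIFO evicts it.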

Advantages of Virtual Memory

  • Allows programs larger than physical memory to run.

  • Provides isolation between processes.

  • Increases the degree of multiprogramming.

Disadvantages

  • Slower, due to disk access on page faults.

  • Thrashing can occur if the system keeps swapping pages excessively.

4. Direct Memory Access (DMA)

When input/output (I/O) devices transfer data to or from memory, they typically involve the CPU. For high-speed devices like disks or network interfaces, however, CPU involvement in every transfer would be inefficient. Direct Memory Access (DMA) solves this problem.

Concept Of DMA

DMA allows certain hardware subsystems to access main memory directly, bypassing the CPU. It improves the efficiency of data transfers between I/O devices and memory.

Components

  • DMA Controller (DMAC): a hardware component that manages DMA operations.

  • Registers in the DMAC:

    • Address register: holds the starting memory address where data will be read/written.

    • Count register: holds the number of bytes to transfer.

    • Control register: contains control bits (read/write mode, device ID, interrupt enable, etc.).

DMA Operation Steps

  1. The CPU initializes the DMA controller with the required parameters.

  2. The DMAC takes over the system bus and transfers data directly between the I/O device and memory.

  3. Once the transfer completes, the DMAC sends an interrupt to notify the CPU.
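
The three steps above can be modelled as a toy DMA controller. The register names follow the text; everything else (the memory array, the data values) is invented for illustration:

```python
# Toy model of a DMA transfer: CPU programs the DMAC's registers,
# the DMAC moves the block, then flags completion (the interrupt).
class DMAController:
    def __init__(self, memory):
        self.memory = memory
        self.address = 0        # Address register: start address
        self.count = 0          # Count register: bytes to transfer
        self.control = {}       # Control register: mode / status bits

    def program(self, address, count, mode):
        # Step 1: CPU initializes the DMAC with the parameters.
        self.address, self.count = address, count
        self.control = {"mode": mode, "done": False}

    def transfer(self, device_data):
        # Step 2: DMAC moves data into memory with no CPU involvement.
        for i in range(self.count):
            self.memory[self.address + i] = device_data[i]
        # Step 3: completion flag stands in for the interrupt line.
        self.control["done"] = True

memory = [0] * 64
dmac = DMAController(memory)
dmac.program(address=16, count=4, mode="burst")
dmac.transfer([0xDE, 0xAD, 0xBE, 0xEF])
```

The key point the model captures: between `program` and the completion flag, the CPU executes no per-byte transfer code.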

Advantages

  • Reduces CPU overhead.

  • Increases I/O throughput.

  • Allows CPU computation and data transfer to proceed in parallel.

Modes of DMA Transfer

  1. Burst mode (block mode): the entire data block is transferred in one go.

  2. Cycle stealing mode: the DMAC steals one bus cycle at a time.

  3. Transparent mode: DMA transfers occur only when the CPU is not using the bus.

5. I/O Organization

Input/output organization deals with how data is transferred between the CPU and peripheral devices. Since I/O devices are much slower than the CPU, efficient coordination is essential.

I/O Techniques

  1. Programmed I/O

    • The CPU is responsible for the data transfer.

    • It continuously checks (polls) the device status until the device is ready.

    • Simple, but inefficient, as the CPU remains busy waiting.

  2. Interrupt-Driven I/O

    • The device sends an interrupt to the CPU when it is ready for data transfer.

    • The CPU performs other tasks in the meantime.

    • More efficient than programmed I/O.

  3. DMA (Direct Memory Access)

    • As discussed above, DMA handles the data transfer without CPU intervention.

    • Used for high-speed devices like disks.
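
The polling loop at the heart of programmed I/O can be sketched against a mock device (the device model below is invented purely for illustration):

```python
# Sketch of programmed I/O: the CPU busy-waits on the status
# register before reading each byte from the data register.
class MockDevice:
    def __init__(self, data):
        self.data = list(data)

    def ready(self):
        return bool(self.data)      # status register: ready if data remains

    def read_byte(self):
        return self.data.pop(0)     # data register: next byte

def programmed_io_read(device, n):
    buffer = []
    for _ in range(n):
        while not device.ready():   # polling: CPU does nothing else here
            pass
        buffer.append(device.read_byte())
    return buffer

received = programmed_io_read(MockDevice(b"GATE"), 4)
```

The `while not device.ready()` loop is exactly the wasted CPU time that interrupt-driven I/O and DMA are designed to eliminate.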

I/O Interface and Controllers

An I/O interface connects peripheral devices to the CPU and memory. It consists of:

  • Data register: temporarily holds the data being transferred.

  • Status register: indicates device status (busy/ready/error).

  • Control register: holds control commands for the device.

I/O Addressing

  • Isolated I/O (port-mapped I/O): a separate address space for I/O devices.

  • Memory-mapped I/O: I/O devices share the same address space as memory, allowing normal read/write instructions to access I/O.

6. Performance Considerations and Summary

Component         | Access Time | Capacity  | Cost Per Bit | Managed By
CPU registers     | ~1 ns       | Very low  | Very high    | Compiler
Cache memory      | 1–10 ns     | Low       | High         | Hardware
Main memory       | 50–100 ns   | Medium    | Moderate     | OS
Secondary storage | 1–10 ms     | Very high | Low          | OS / user

The memory hierarchy aims to minimize the average access time:

T_avg = H × T_c + (1 − H) × T_m

where
H = hit ratio,
T_c = cache access time,
T_m = main memory access time.
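
Plugging illustrative numbers into the formula (a 10 ns cache, 100 ns main memory, and a 95% hit ratio — values chosen for the example, not from the text):

```python
def avg_access_time(hit_ratio, t_cache, t_memory):
    """T_avg = H * Tc + (1 - H) * Tm, times in nanoseconds."""
    return hit_ratio * t_cache + (1 - hit_ratio) * t_memory

t_avg = avg_access_time(0.95, 10, 100)   # 0.95*10 + 0.05*100 = 14.5 ns
```

Even a 95% hit ratio leaves the average well above the cache's own 10 ns, which is why exam problems focus on how mapping and replacement choices push the hit ratio up. (Some textbooks charge T_c + T_m on a miss instead; the form used here follows the formula above.)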

Efficient cache management, paging algorithms, DMA usage, and I/O coordination all contribute to overall system performance by reducing CPU idle time and ensuring smooth data flow.

Conclusion

Understanding memory hierarchy, cache memory, virtual memory, DMA, and I/O organization is fundamental to mastering computer architecture. These components collectively ensure that processors can work efficiently despite the inherent speed differences between the CPU and memory or I/O devices. For competitive exams like GATE and UGC NET, these topics are frequently tested through both conceptual and numerical questions. A solid grasp of mapping techniques, replacement algorithms, and data transfer modes not only aids in examinations but also provides a deeper understanding of real-world computer systems.

Tags:
Memory Hierarchy, Cache Memory, Virtual Memory, DMA, I/O Organization, GATE CSE, UGC NET Computer Science, Page Replacement, Cache Mapping, Direct Memory Access, Computer Architecture.
