Mapping Functions in Cache Memory

Associative Mapping vs. Direct Mapping

Dec 2, 2021



Cache Memory:

Cache memory is a special, very high-speed memory used to speed up and synchronize with a high-speed CPU. It is costlier than main memory or disk memory but more economical than CPU registers. Cache memory is an extremely fast memory type that acts as a buffer between RAM and the CPU. It holds frequently requested data and instructions so that they are immediately available to the CPU when needed.

Cache memory is used to reduce the average time to access data from main memory. The cache is a smaller, faster memory that stores copies of the data from frequently used main memory locations. A CPU contains several independent caches, which store instructions and data separately.
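To see why this matters quantitatively, here is a rough sketch using the standard average memory access time (AMAT) formula; the latency and hit-rate numbers below are assumed for illustration, not taken from this article:

```python
# Rough illustration of how a cache lowers average access time.
# The latencies and hit rate below are assumed example numbers.

hit_time = 1        # ns: time to read from the cache on a hit
miss_penalty = 100  # ns: extra time to fetch the block from main memory
hit_rate = 0.95     # fraction of accesses served directly by the cache

# AMAT = hit time + miss rate * miss penalty
amat = hit_time + (1 - hit_rate) * miss_penalty
print(f"Average access time with this cache: {amat:.1f} ns")  # -> 6.0 ns
```

With these assumed numbers, the cache brings the average access time down from 100 ns to about 6 ns, which is the whole point of the buffer described above.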

What is Cache Mapping (Mapping Functions):

Cache mapping defines how a block of data from the main memory is mapped to the cache memory in case of a cache miss.

OR

Cache mapping is a technique by which the contents in the main memory are brought into the cache memory if there’s a cache miss.

Cache mapping is performed using the following 3 techniques:

  1. Associative Mapping
  2. Direct Mapping
  3. K-way Set Associative Mapping

We’ll discuss the first two and find the advantages and disadvantages of each technique:

Associative Mapping:

In this type of mapping, the associative memory is used to store content and addresses of the memory word. Any block can go into any line of the cache. This means that the word id bits are used to identify which word in the block is needed, but the tag becomes all of the remaining bits. This enables the placement of any word at any place in the cache memory. It is considered to be the fastest and the most flexible mapping form.

This mapping method is also known as fully associative mapping.
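To make this concrete, the following is a minimal Python sketch of a fully associative lookup; the block size (16 bytes) and number of lines (8) are assumed values for illustration. The address contributes only a tag and a word offset, and the tag must be compared against every line:

```python
# Minimal sketch of a fully associative cache lookup (illustrative only).
# Assumed parameters: 16-byte blocks (4 offset bits) and 8 cache lines.

BLOCK_SIZE = 16  # bytes per block -> 4 word/offset bits
NUM_LINES = 8    # any block may occupy any of these lines

# Each line holds a valid bit and a tag; the cached data is omitted here.
cache = [{"valid": False, "tag": None} for _ in range(NUM_LINES)]

def split_address(addr):
    """In fully associative mapping the address is just tag + word offset."""
    offset = addr % BLOCK_SIZE
    tag = addr // BLOCK_SIZE  # every remaining bit belongs to the tag
    return tag, offset

def lookup(addr):
    tag, offset = split_address(addr)
    # Real hardware compares all tags in parallel; software scans them in turn.
    for line in cache:
        if line["valid"] and line["tag"] == tag:
            return "hit", offset
    return "miss", offset
```

Because any line can hold the block, a miss also forces a choice of which line to evict, which is why a replacement policy is needed (see the disadvantages below).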

Advantages of Associative Mapping:

  • Associative mapping is fast.
  • Any block can be mapped to any line of the cache, giving maximum flexibility.
  • The mapping scheme itself is conceptually simple.
  • It gives a higher hit rate for the same cache size.
  • It suffers fewer conflict misses.
  • The cache can be enlarged without lengthening an index field, since no index field is used.

Disadvantages of Associative Mapping:

  • Cache memory implementing associative mapping is expensive, as it requires storing addresses along with the data.
  • A replacement algorithm must be used to determine which line of cache to swap out.
  • More space is needed for the tag field.
  • The most important disadvantage is the complex circuitry needed to examine all of the tags in parallel in the cache.

Direct Mapping:

The simplest technique, known as direct mapping, maps each block of main memory into only one possible cache line. Each memory block is assigned to a specific line in the cache; if that line is already occupied when a new block needs to be loaded, the old block is evicted. The address is split into two parts, an index field and a tag field. The tag is stored in the cache alongside the data, and the index selects the cache line to check. Direct mapping's performance is directly proportional to the hit ratio.

For purposes of cache access, each main memory address can be viewed as consisting of three fields. The least significant w bits identify a unique word or byte within a block of main memory; in most contemporary machines, the address is at the byte level. The remaining s bits specify one of the 2^s blocks of main memory. The cache logic interprets these s bits as a tag of s - r bits (the most significant portion) and a line field of r bits. This latter field identifies one of the m = 2^r lines of the cache.
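The field breakdown above can be sketched directly in code; the specific widths (w = 4 word bits, r = 6 line bits, so m = 2^6 = 64 lines) are assumed here purely for illustration:

```python
# Sketch of the direct-mapped address split described above.
# Assumed field widths: w = 4 word bits, r = 6 line bits; the tag is the rest.

W_BITS = 4  # word/byte-within-block bits (least significant)
R_BITS = 6  # line (index) bits -> 2**6 = 64 cache lines

def split_direct_mapped(addr):
    word = addr & ((1 << W_BITS) - 1)              # lowest w bits
    line = (addr >> W_BITS) & ((1 << R_BITS) - 1)  # next r bits select the line
    tag = addr >> (W_BITS + R_BITS)                # remaining s - r bits
    return tag, line, word

# Equivalent statement of the placement rule:
#   cache line = (main-memory block number) mod (number of cache lines)
def line_for_block(block_number, num_lines=1 << R_BITS):
    return block_number % num_lines
```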

Advantages of Direct Mapping:

  • Direct mapping is the simplest cache mapping technique.
  • Only the tag field needs to be compared when searching for a word, so lookups are fast and cheap.
  • Replacement is straightforward.
  • No search over multiple lines is required to find a block in the cache.
  • A direct-mapped cache is less expensive than an associative cache.

Disadvantages of Direct Mapping:

  • Performance suffers because a block must be evicted whenever another block that maps to the same line is referenced, even if other lines are free.
  • The hit ratio tends to be low with this technique.
  • Each block of main memory maps to exactly one location in the cache; if two blocks that map to the same line are referenced repeatedly, they continually evict each other, a behavior known as thrashing (a short simulation follows this list).
  • Some cache lines may remain empty.
  • Overall cache utilization is poor.
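Here is a small simulation of the thrashing behavior mentioned above; the cache size and the access pattern are assumed example values, not taken from the article:

```python
# Thrashing in a direct-mapped cache (illustrative parameters).
# Two blocks that map to the same line evict each other on every access.

NUM_LINES = 4
cache = [None] * NUM_LINES  # each entry holds the block number resident in that line
hits = misses = 0

def access(block_number):
    global hits, misses
    line = block_number % NUM_LINES  # direct-mapped placement rule
    if cache[line] == block_number:
        hits += 1
    else:
        misses += 1                  # the old block is evicted
        cache[line] = block_number

# Blocks 0 and 4 both map to line 0, so alternating between them never hits.
for _ in range(10):
    access(0)
    access(4)

print(f"hits={hits}, misses={misses}")  # -> hits=0, misses=20
```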

Conclusion:

Set-associative cache mapping combines the strengths of direct and associative mapping, so it mitigates the main problems of both techniques described above. A brief sketch of how it works is given below.
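The following minimal sketch of a 2-way set-associative lookup uses assumed parameters, purely for illustration: the index chooses a set (as in direct mapping), and the block may occupy either way of that set, which is searched associatively.

```python
# Minimal sketch of 2-way set-associative placement (parameters assumed).
# The set index is chosen like a direct-mapped line; the search within the
# set is associative.

NUM_SETS = 4
cache = [[None, None] for _ in range(NUM_SETS)]  # cache[set] = [tag of way 0, tag of way 1]

def access(block_number):
    set_index = block_number % NUM_SETS  # direct-mapped part: pick a set
    tag = block_number // NUM_SETS
    ways = cache[set_index]
    if tag in ways:                      # associative part: search the set
        return "hit"
    # Miss: use an empty way if one exists, otherwise evict way 0
    # (a trivial stand-in for a real replacement policy such as LRU).
    victim = ways.index(None) if None in ways else 0
    ways[victim] = tag
    return "miss"

# Blocks 0 and 4 both index set 0, yet they no longer evict each other:
print([access(b) for b in (0, 4, 0, 4)])  # -> ['miss', 'miss', 'hit', 'hit']
```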

References:

Computer Organization and Design: The Hardware/Software Interface, Third Edition, by David A. Patterson and John L. Hennessy

GeeksforGeeks

