Cache is small relative to main memory.

Question

Cache is small relative to main memory. Because the cache is small relative to the main memory, only blocks of memory are brought into the cache and then replaced when necessary. The text discusses the different ways by which blocks can be assigned to, and located in, the cache.

a) What are the different ways called?
b) How are they the same?

Summary

The question notes that the cache is small relative to main memory, so memory is moved into the cache in blocks and replaced as needed. We are asked to name the different techniques by which blocks are assigned to and located in the cache, and to explain what those techniques have in common.

Explanation

a)

There are three such techniques, known as cache mapping techniques. Because the cache is smaller than main memory, a mapping function determines where each main-memory block may be placed in the cache. The three mapping techniques are as follows.

⦁ Direct Mapping –

In this type of mapping, each block of main memory maps to exactly one possible cache line; every memory block is assigned a particular line in the cache. If that line already holds data from another block, the old data is evicted and the new block takes its place. The memory address is divided into fields: a tag field and an index (line) field. The index selects the cache line, and the tag stored in that line is compared with the tag of the requested address to decide whether the access is a hit.
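The behavior described above can be sketched in a few lines of Python. The cache size and block size below are illustrative choices, not values from the text; the sketch tracks only tags, since that is enough to show hits, misses, and evictions.

```python
# Minimal sketch of direct-mapped lookup (illustrative parameters).
NUM_LINES = 8      # number of cache lines (a power of two)
BLOCK_SIZE = 4     # words per block

def split_address(addr):
    """Split a word address into (tag, line index, offset)."""
    offset = addr % BLOCK_SIZE
    block = addr // BLOCK_SIZE
    index = block % NUM_LINES    # the one line this block must use
    tag = block // NUM_LINES     # identifies which block occupies that line
    return tag, index, offset

class DirectMappedCache:
    def __init__(self):
        # each line remembers the tag of the block currently stored there
        self.tags = [None] * NUM_LINES

    def access(self, addr):
        tag, index, _ = split_address(addr)
        if self.tags[index] == tag:
            return "hit"
        # the old block (if any) is evicted; the new block replaces it
        self.tags[index] = tag
        return "miss"

cache = DirectMappedCache()
print(cache.access(0))    # miss (cold cache)
print(cache.access(0))    # hit
print(cache.access(128))  # miss: block 32 maps to the same line 0, evicting block 0
print(cache.access(0))    # miss again: two conflicting blocks thrash one line
```

The last two accesses show the drawback the text returns to later: two blocks that share an index keep evicting each other even though other lines sit empty.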

⦁ Associative Mapping –

This mapping technique is different from the previous one. The cache stores not only the data but also a tag derived from the memory address, and any block of main memory can be placed in any line of the cache; blocks need not occupy contiguous lines. Because a block can reside anywhere, the cache needs that tag as an identifier to find it: on each access, the tags of all lines are compared with the requested address. Since any word can be placed on any line of the cache memory, this is the most flexible mapping technique, though the comparison against every line makes it the most expensive to implement in hardware.
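A fully associative lookup can be sketched as below. The replacement policy (FIFO) and the sizes are assumptions for illustration; real caches commonly use LRU or an approximation of it. Hardware would compare all tags in parallel, whereas this sketch uses a dictionary lookup.

```python
# Minimal sketch of a fully associative cache (illustrative FIFO replacement).
from collections import OrderedDict

NUM_LINES = 8
BLOCK_SIZE = 4

class FullyAssociativeCache:
    def __init__(self):
        # tag -> placeholder for the cached block; insertion order gives FIFO age
        self.lines = OrderedDict()

    def access(self, addr):
        tag = addr // BLOCK_SIZE   # the whole block number serves as the tag
        if tag in self.lines:      # hardware compares every line's tag at once
            return "hit"
        if len(self.lines) >= NUM_LINES:
            self.lines.popitem(last=False)  # evict the oldest block (FIFO)
        self.lines[tag] = None
        return "miss"

cache = FullyAssociativeCache()
print(cache.access(0))    # miss
print(cache.access(128))  # miss, but no conflict: both blocks can coexist
print(cache.access(0))    # hit: unlike direct mapping, block 0 was not evicted
```

Note the contrast with the direct-mapped sketch: the same two addresses that thrashed a single line there now occupy separate lines without conflict.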

⦁ Set-associative Mapping –

This technique is also different from the previous two, and it removes the main drawback of direct mapping: the problem of thrashing between conflicting blocks is largely eliminated. Whereas direct mapping assigns each block a single fixed line, here the index field of the address selects a set of n lines, and the block may occupy any line within that set. As a result, two or more blocks whose addresses share the same index can reside in the cache at the same time. This technique is thus a combination of the previous two.
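The set-based lookup can be sketched as follows, with illustrative sizes (a 2-way cache with 4 sets) and a simple FIFO eviction within each set, both assumptions rather than values from the text.

```python
# Minimal sketch of a 2-way set-associative cache (illustrative parameters).
NUM_SETS = 4
WAYS = 2          # lines per set
BLOCK_SIZE = 4

class SetAssociativeCache:
    def __init__(self):
        # each set holds up to WAYS tags; list order gives FIFO age within the set
        self.sets = [[] for _ in range(NUM_SETS)]

    def access(self, addr):
        block = addr // BLOCK_SIZE
        index = block % NUM_SETS   # the index field selects a set
        tag = block // NUM_SETS    # the tag identifies the block within the set
        ways = self.sets[index]
        if tag in ways:
            return "hit"
        if len(ways) >= WAYS:
            ways.pop(0)            # evict the oldest block in this set only
        ways.append(tag)
        return "miss"

cache = SetAssociativeCache()
print(cache.access(0))   # miss: block 0 goes into set 0
print(cache.access(64))  # miss: block 16 also indexes set 0, fills the second way
print(cache.access(0))   # hit: both conflicting blocks fit in the same set
```

The third access is the point of the technique: the pair of blocks that would thrash a direct-mapped line share a set here, while lookup still only searches the n lines of one set rather than the whole cache.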

 

b)

Now for the second question, how the techniques are similar. Each technique stores a tag, a key derived from the main-memory address, alongside the cached data, and each uses that tag to determine whether the requested block of main memory is currently present in the cache.

 

