Mapping functions.md (1669B)
1 +++ 2 title = 'Mapping functions' 3 +++ 4 # Mapping functions 5 consider (with 16-bit addressable memory): 6 cache — 128 blocks of 16 words each = 2048 words (2K) 7 main memory — 4096 blocks of 16 words each = 64K words 8 9 valid bit — 0 when power first turned on. set to 1 when a memory block is loaded into a location. the processor only fetches data from a cache block if the valid bit is 1. 10 11 cache flushing — forcing all dirty blocks to be written back to memory 12 13 Direct mapping: 14 15 - block *j* of main memory is block *j* modulo 128 of cache 16 - e.g. block 0, 128, 256… of memory is stored in cache block 0 17 - e.g. block 1, 129, 257… of memory is stored in cache block 1 18 - etc. 19 - replacement algorithm is trivial — placement of block is determined by its memory address (three fields tag, block, word) 20 21 Associative mapping: 22 23 - a main memory block can be placed into any cache block position 24 - 12 tag bits identify a memory block in the cache 25 - new block replaces an existing block only if cache is full 26 - more efficient use of space, but higher complexity because of need for parallel tag search (associative search) 27 28 Set-associative mapping 29 30 - blocks of cache are grouped into sets, mapping allows block of main memory to be in any block of a specific set 31 - gets rid of problem of contention because of a few choices for block placement 32 - hardware cost reduced because smaller associative search 33 34 ## Replacement algorithms 35 LRU replacement algorithm 36 37 - overwrite least recently used block (the one that’s gone the longest without being referenced) 38 - cache controller must track references to all blocks 39 40 Random algorithm — quite effective in practice