Figure 4.7:
(a) A part of memory with five processes and three holes. The tick marks show the memory allocation units. The shaded regions (0s in the bitmap) are free. (b) The corresponding bitmap. (c) The same information as a list.
- Memory allocation and freeing operations are partially predictable: since the organization is hierarchical, freeing occurs in the reverse (opposite) order of allocation (Stack Organization)
- Allocation and release of heap space is essentially random. Heaps are used to allocate arbitrary list structures and complex data organizations. As programs execute (allocating and freeing structures), heap space fills with holes (unallocated space, i.e., fragmentation) (Heap Organization)
- How do we know when memory can be freed? It is trivial when a memory block is used by one process
- However, this task becomes difficult when a block is shared (e.g., accessed through pointers) by several processes
- Two problems with reclaiming memory:
- Dangling pointers: occur when one user frees a shared block while other pointers to it still exist; those pointers now reference freed memory.
- Memory leaks: occur when we fail to free storage that will never be used again. This is a common and serious problem, and unacceptable in an OS.
- Memory Management with Bitmaps (see Fig. 4.7)
This technique divides memory into fixed-size allocation units (e.g., 256-byte blocks) and keeps an array of bits (a bitmap), with one bit for each unit
- Memory Management with Linked Lists (see Fig. 4.7)
A free list keeps track of the unused memory. Several algorithms can be used, depending on how free blocks are chosen for allocation (Dynamic Partitioning Placement Algorithms):
- First-fit: Allocate first hole that is big enough
- Scan the list for the first entry that fits
- If the hole is larger than needed, break it into an allocated part and a free part
- Tends to pack processes into the front end of memory, which must be searched over on every attempt to find a free block
- May leave many small, unusable holes at the beginning.
- External fragmentation
- Next-fit
- Like first-fit, except it begins its search from the point in the list where the last request succeeded, instead of at the beginning.
- More often allocates a block of memory at the end of memory where the largest block is found
- The largest block of memory is broken up into smaller blocks
- Compaction is required to obtain a large block at the end of memory
- Simulations show it performs slightly worse than first-fit
- Best-fit: Allocate smallest hole that is big enough;
- Chooses the block that is closest in size to the request
- Poor performer
- Has to search complete list, unless ordered by size.
- Since the smallest adequate block is chosen, the leftover fragment is tiny and often unusable, so memory compaction must be done more often
- Worst-fit: Allocate largest hole;
- Chooses the largest available block
- The idea is that the leftover fragment will be large enough to be usable
- Poor performer
- Must also search the entire list to find the largest hole, unless the list is kept in size order
- Simulations show it is not a good idea
2004-05-25