- Computer programs must be in main memory (high-speed semiconductor memory, also called random-access memory or RAM) to be executed. We say random access because the CPU can access any byte of storage in any order.
- Referred to as real memory or primary memory. Volatile, because its contents are lost when the power is removed.
- Interaction is achieved through a sequence of load or store instructions to specific memory addresses.
- The load instruction moves a word (a collection of bytes; each word has its own address) from main memory to an internal register within the CPU, whereas the store instruction moves the content of a register to main memory (a rough C sketch of this interaction follows below).
- Aside from explicit loads and stores, the CPU automatically loads instructions from main memory for execution.
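- As a rough illustration in C (an analogy only; the variable names are invented for this sketch): reading through a pointer into a local variable plays the role of a load into a CPU register, and writing through the pointer plays the role of a store back to main memory.

```c
#include <stdio.h>

int main(void) {
    int memory_word = 42;      /* a word residing in main memory             */
    int *addr = &memory_word;  /* its address                                */

    int reg = *addr;           /* "load": copy the word into a register-like
                                  local variable                             */
    reg = reg + 1;             /* operate on the value inside the CPU        */
    *addr = reg;               /* "store": write the register back to memory */

    printf("%d\n", memory_word);  /* prints 43 */
    return 0;
}
```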
- A typical instruction-execution cycle, as executed on a system with a von Neumann architecture:
- First fetches an instruction from memory and stores that instruction in the instruction register.
- The instruction is then decoded and may cause operands to be fetched from memory and stored in some internal register.
- After the instruction has been executed on the operands, the result may be stored back in memory.
- Fetch-execute cycle (see Fig. 2.8; a toy C sketch follows the list below)
Figure 2.8: Fetch and Execute Cycle.
- The program counter (PC) holds the address of the instruction to be fetched next.
- The processor fetches the instruction from memory.
- The program counter is incremented after each fetch.
- Fetch and execution are overlapped on modern architectures (pipelining).
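- A toy sketch of this cycle in C (the one-word instruction format, the opcodes, and the memory layout below are invented purely for illustration and do not correspond to any real ISA): the loop fetches the word addressed by the PC into an instruction register, increments the PC, then decodes and executes.

```c
#include <stdio.h>

/* Toy instruction set: each instruction is one word.
   Opcode in the high byte, operand address in the low byte. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

int main(void) {
    /* A tiny "main memory": program at addresses 0-3, data at 8-10. */
    int mem[16] = {
        (OP_LOAD  << 8) | 8,   /* 0: acc = mem[8]            */
        (OP_ADD   << 8) | 9,   /* 1: acc += mem[9]           */
        (OP_STORE << 8) | 10,  /* 2: mem[10] = acc           */
        (OP_HALT  << 8) | 0,   /* 3: stop                    */
        0, 0, 0, 0,
        5, 7, 0                /* 8, 9: operands; 10: result */
    };

    int pc = 0;      /* program counter                 */
    int ir = 0;      /* instruction register            */
    int acc = 0;     /* accumulator (internal register) */
    int running = 1;

    while (running) {
        ir = mem[pc];          /* fetch: instruction comes from memory */
        pc = pc + 1;           /* PC is incremented after each fetch   */

        int opcode  = ir >> 8; /* decode                               */
        int address = ir & 0xFF;

        switch (opcode) {      /* execute                              */
        case OP_LOAD:  acc = mem[address];        break;
        case OP_ADD:   acc = acc + mem[address];  break;
        case OP_STORE: mem[address] = acc;        break;
        case OP_HALT:  running = 0;               break;
        }
    }
    printf("result at mem[10] = %d\n", mem[10]);  /* 5 + 7 = 12 */
    return 0;
}
```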
- Notice that the memory unit sees only a stream of memory addresses; it does not know how they are generated (by the instruction counter, indexing, indirection, literal addresses, or some other means).
- Ideally, we want the programs and data to reside in main memory permanently. This arrangement usually is not possible for the following two reasons:
- Main memory is usually too small to store all needed programs and data permanently.
- Main memory is a volatile storage device that loses its contents when power is turned off or otherwise lost.
- Thus, most computer systems provide secondary storage as an extension of main memory. The most common secondary-storage device is a magnetic disk.
- Many programs then use the disk as both a source and a destination of the information for their processing. Hence, the proper management of disk storage is of central importance to a computer system.
- The main differences among the various storage systems lie in speed, cost, size, and volatility. The wide variety of storage systems in a computer system can be organized in a hierarchy (See Fig. 2.9) according to speed and cost. The higher levels are
expensive, but they are fast. As we move down the hierarchy, the cost per bit generally decreases, whereas the access time generally increases.
- Stages such as the CPU registers and cache are typically located within the CPU chip, so distances are very short and buses can be made very wide (e.g., 128 bits), yielding very fast speeds.
Figure 2.9: A typical memory hierarchy. The numbers are very rough approximations.
- The design of a complete memory system must balance all the factors. It must use only as much expensive memory as necessary while providing as much inexpensive, nonvolatile memory as possible. Caches can be installed to improve performance where a large access-time or transfer-rate disparity exists between two components.
- Cache memory:
- Main memory should be fast, abundant, and cheap. Unfortunately, that is not the reality. Solution: a combination of fast & expensive and slow & cheap memory (see Fig. 2.11, left).
- Caches contain a small amount of very fast storage that holds a subset of the data held in main memory.
- The processor first checks the cache. If the needed information is not found there, the block of memory containing it is moved into the cache, replacing some other data.
Figure 2.10: Cache and Main Memory.
Figure 2.11: Left: Cache Memory. Right: (a) A quad-core chip with a shared L2 cache. (b) A quad-core chip with separate L2 caches.
- Cache design:
- Cache size: even small caches can have a significant impact on performance.
- Line size (block size): the unit of data exchanged between the cache and main memory (see Fig. 2.11, left).
- Cache Hit means the information was found in the cache. A larger line size generally yields a higher hit rate.
- Cache Miss means the information was not found in the cache and must be fetched from main memory (a small sketch of the address breakdown follows below).
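- As a small sketch of how a byte address relates to the line size (the cache geometry here, 64 lines of 64 bytes each, is an assumption chosen only for illustration): the address splits into a tag, a line index, and a byte offset within the line; the hit/miss check compares the stored tag at that index.

```c
#include <stdio.h>
#include <stdint.h>

/* Assumed (illustrative) geometry: 64 lines of 64 bytes each,
   i.e. a 4 KB direct-mapped cache. */
#define LINE_SIZE 64u   /* bytes per cache line (block) */
#define NUM_LINES 64u   /* number of lines in the cache */

int main(void) {
    uint32_t address = 0x0001A2C4;  /* an arbitrary byte address */

    uint32_t offset = address % LINE_SIZE;               /* byte within the line   */
    uint32_t index  = (address / LINE_SIZE) % NUM_LINES; /* which cache line       */
    uint32_t tag    = address / (LINE_SIZE * NUM_LINES); /* identifies the block   */

    /* A hit means the line at 'index' is valid and stores this 'tag';
       otherwise it is a miss and the whole line must be fetched. */
    printf("address 0x%08X -> tag 0x%X, index %u, offset %u\n",
           (unsigned)address, (unsigned)tag, (unsigned)index, (unsigned)offset);
    return 0;
}
```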
- Questions when dealing with cache (a minimal direct-mapped sketch in C follows this list):
- When to put a new item into the cache.
- Which cache line to put the new item in.
- Which item to remove from the cache when a slot is needed.
- Where to put a newly evicted item in the larger memory.
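- To make these questions concrete, here is a minimal direct-mapped cache simulation in C (the geometry, the write-back handling, and the sequential access pattern are assumptions for illustration, not a prescribed design): on a miss the new block is brought in, its index bits select the only line it may occupy, the line's previous occupant is the item evicted, and a dirty evicted block would be written back to its own location in main memory.

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical direct-mapped cache: the index bits of the address decide
   which line a block may occupy, so "which line to put the new item in"
   and "which item to remove" each have exactly one possible answer. */
#define LINE_SIZE 64u
#define NUM_LINES 64u

struct line {
    int      valid;
    int      dirty;              /* modified since it was brought in? */
    uint32_t tag;
    uint8_t  data[LINE_SIZE];    /* copy of one main-memory block     */
};

static struct line cache[NUM_LINES];
static unsigned hits, misses;

/* Simulate a read of one byte; main memory itself is not modeled here. */
void access_byte(uint32_t address) {
    uint32_t index = (address / LINE_SIZE) % NUM_LINES;
    uint32_t tag   = address / (LINE_SIZE * NUM_LINES);
    struct line *l = &cache[index];

    if (l->valid && l->tag == tag) {
        hits++;                                /* cache hit              */
        return;
    }
    misses++;                                  /* cache miss             */
    if (l->valid && l->dirty) {
        /* The evicted block would be written back to its own place in
           main memory (write-back policy); omitted in this sketch.     */
    }
    l->valid = 1;                              /* bring the new block in */
    l->dirty = 0;
    l->tag   = tag;
    memset(l->data, 0, LINE_SIZE);             /* stand-in for the fetch */
}

int main(void) {
    /* Sequential accesses: the first byte of each block misses, the rest hit,
       illustrating why a larger line size tends to raise the hit rate. */
    for (uint32_t a = 0; a < 4096; a++)
        access_byte(a);
    printf("hits=%u misses=%u\n", hits, misses);  /* 4032 hits, 64 misses */
    return 0;
}
```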
- Disk Cache
- A portion of main memory used as a buffer to temporarily hold data for the disk.
- Some data written out may be referenced again. The data are retrieved rapidly from the software cache instead of slowly from disk.
- Future storage technology includes 3-dimensional crystal structures which allow optical access to a dense 3-dimensional storage facility (see Fig. 2.12).
Figure 2.12: IBM Advanced Storage Roadmap.