Figure 3:
(a) Without a standard driver interface (b) With a standard driver interface.
- There is commonality between drivers of similar classes
- Divide I/O software into device-dependent and device-independent parts
- Device-independent software includes
- Buffer or Buffer-cache management
- Managing access to dedicated devices
- Error reporting
- Driver-Kernel Interface; the major issue is uniform interfaces to devices and to the kernel
- Uniform device interface for kernel code
- Allows different devices to be used in the same way
- No need to rewrite filesystem to switch between SCSI, IDE or RAM disk
- Allows internal changes to a device driver without fear of breaking kernel code
- Uniform kernel interface for device code
- Drivers use a defined interface to kernel services (e.g. kmalloc, install IRQ handler, etc.)
- Allows kernel to evolve without breaking existing drivers
- Together, the two uniform interfaces avoid a lot of programming effort implementing new interfaces (see the sketch below)
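As a concrete illustration of the uniform device interface, here is a minimal C sketch (not a real kernel API; struct dev_ops, ramdisk_ops and fs_read_block are illustrative names) of the usual technique: a table of function pointers that every driver fills in, so device-independent code such as a filesystem can use any block device the same way.

```c
/* Minimal sketch (assumed names, not a real kernel API): a uniform device
 * interface expressed as a table of function pointers. The kernel calls
 * these entry points without knowing whether the device is SCSI, IDE or a
 * RAM disk. */
#include <stddef.h>
#include <sys/types.h>

struct dev_ops {
    int     (*open) (void *dev);
    int     (*close)(void *dev);
    ssize_t (*read) (void *dev, void *buf, size_t len, off_t off);
    ssize_t (*write)(void *dev, const void *buf, size_t len, off_t off);
};

/* A RAM-disk driver fills in the same table as any other block driver. */
static ssize_t ramdisk_read(void *dev, void *buf, size_t len, off_t off)
{
    (void)dev; (void)buf; (void)off;   /* sketch: real code copies from RAM */
    return (ssize_t)len;
}

static const struct dev_ops ramdisk_ops = {
    .read = ramdisk_read,
    /* .open, .close, .write omitted in this sketch */
};

/* Device-independent code (e.g. a filesystem) only ever sees dev_ops,
 * so switching between SCSI, IDE or a RAM disk needs no filesystem change. */
ssize_t fs_read_block(const struct dev_ops *ops, void *dev,
                      void *buf, size_t len, off_t off)
{
    return ops->read(dev, buf, len, off);
}
```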
Figure 4:
(a) Unbuffered input (b) Buffering in user space (c) Single buffering in the kernel followed by copying to user space (d) Double buffering in the kernel.
- No Buffering (see Fig. 4)
- Process must read/write a device a byte/word at a time
- Each individual system call adds significant overhead
- Process must wait until each I/O is complete
- Blocking/interrupt/waking adds to overhead.
- Many short runs of a process are inefficient (poor CPU cache temporal locality); the sketch below shows the per-byte system-call pattern
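The per-byte overhead can be made concrete with a small user-space sketch using the standard POSIX read()/write() calls: every byte transferred costs a separate system call, and potentially a block and wakeup.

```c
/* Sketch of unbuffered input: one system call per byte transferred.
 * Each read() and write() traps into the kernel, so copying N bytes
 * costs roughly 2N system calls. */
#include <unistd.h>
#include <sys/types.h>

ssize_t copy_unbuffered(int in_fd, int out_fd)
{
    char c;
    ssize_t total = 0;
    while (read(in_fd, &c, 1) == 1) {    /* one system call per byte in  */
        if (write(out_fd, &c, 1) != 1)   /* and another per byte out     */
            return -1;
        total++;
    }
    return total;
}
```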
- User-level Buffering (see Fig. 4)
- Process specifies a memory buffer that incoming data is placed in until it fills
- Filling can be done by interrupt service routine
- Only a single system call, and a single block/wakeup, per data buffer; much more efficient (see the sketch after this item)
- Issues
- What happens if the buffer is paged out to disk?
- Could lose data while buffer is paged in
- Could lock the buffer in memory (needed for DMA anyway); however, many processes doing I/O would reduce the RAM available for paging, and could cause deadlock since RAM is a limited resource
- Consider the write case: when is the buffer available for re-use?
- Either the process must block until the potentially slow device drains the buffer,
- or it must deal with asynchronous signals indicating the buffer has drained
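For contrast with the unbuffered case, here is a minimal user-level buffering sketch, again using POSIX read()/write(); the 4096-byte buffer size is an arbitrary illustrative choice. One system call now moves a whole buffer instead of a single byte.

```c
/* Sketch of user-level buffering: the process hands the kernel a whole
 * buffer, so there is one system call (and one block/wakeup) per
 * BUF_SIZE bytes rather than per byte. */
#include <unistd.h>
#include <sys/types.h>

#define BUF_SIZE 4096    /* arbitrary buffer size for illustration */

ssize_t copy_buffered(int in_fd, int out_fd)
{
    char buf[BUF_SIZE];
    ssize_t n, total = 0;
    while ((n = read(in_fd, buf, sizeof buf)) > 0) {  /* one syscall per buffer */
        if (write(out_fd, buf, (size_t)n) != n)
            return -1;
        total += n;
    }
    return n < 0 ? -1 : total;
}
```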
- Single Buffer (see Fig. 4)
- Operating system assigns a buffer in main memory for an I/O request
- Stream-oriented
- Used a line at a time
- User input from a terminal is one line at a time with carriage return signaling the end of the line
- Output to the terminal is one line at a time
- Block-oriented
- Input transfers made to buffer
- Block moved to user space when needed
- Another block is moved into the buffer: read-ahead (see the sketch after this item)
- User process can process one block of data while next block is read in
- The user process can be swapped out, since input is taking place in system memory rather than user memory
- Operating system keeps track of assignment of system buffers to user processes
- What happens if the kernel buffer is full, the user buffer is swapped out, and more data arrives? We start to lose characters or drop network packets
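The read-ahead idea can be sketched as follows. This is not a real kernel API: start_device_read() and wait_device_read() are assumed helper names (stubbed here so the sketch compiles), standing in for starting an asynchronous device transfer and waiting for its completion, and the caller is assumed to read blocks strictly sequentially with an initial read started at open time.

```c
/* Hedged sketch of single buffering with read-ahead (assumed helpers,
 * not a real kernel API), for strictly sequential block reads. */
#include <string.h>

#define BLOCK_SIZE 4096

static char sys_buf[BLOCK_SIZE];   /* the single system buffer            */
static long next_block = 1;        /* block the current read-ahead fetches */

/* Hypothetical device hooks; a real driver would program the controller
 * and block on an interrupt. Stubbed so the sketch is self-contained.    */
static void start_device_read(long block, char *dst) { (void)block; (void)dst; }
static void wait_device_read(void) { }

/* Deliver the current block to the caller, then start fetching the next
 * one so the device transfer overlaps with the caller's processing.      */
void read_next_block(char *user_dst)
{
    wait_device_read();                          /* previous transfer done?  */
    memcpy(user_dst, sys_buf, BLOCK_SIZE);       /* copy block to user space */
    start_device_read(next_block++, sys_buf);    /* read ahead the next block */
}
```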
- Double Buffer (see Fig. 4)
- Use two system buffers instead of one
- A process can transfer data to or from one buffer while the operating system empties or fills the other buffer
- May be insufficient for really bursty traffic
- Lots of application writes between long periods of computation
- Long periods of application computation while receiving data
- Might want to read-ahead more than a single block for disk
- Notice that single buffering and double buffering are both instances of the bounded-buffer producer-consumer problem (a sketch follows)
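A minimal sketch of that bounded-buffer pattern using POSIX threads; the slot count and the int item type are arbitrary choices for illustration. With NSLOTS set to 2 this corresponds to double buffering: the producer (the driver/interrupt side) fills one slot while the consumer (the process) drains the other.

```c
/* Minimal bounded-buffer producer-consumer sketch with pthreads. */
#include <pthread.h>

#define NSLOTS 2                        /* 2 slots == double buffering */

static int buf[NSLOTS];
static int count = 0, in = 0, out = 0;
static pthread_mutex_t lock      = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t  not_empty = PTHREAD_COND_INITIALIZER;

/* Producer (e.g. the driver/interrupt side filling buffers). */
void put(int item)
{
    pthread_mutex_lock(&lock);
    while (count == NSLOTS)             /* all buffers full: wait  */
        pthread_cond_wait(&not_full, &lock);
    buf[in] = item;
    in = (in + 1) % NSLOTS;
    count++;
    pthread_cond_signal(&not_empty);
    pthread_mutex_unlock(&lock);
}

/* Consumer (e.g. the process draining buffers). */
int get(void)
{
    int item;
    pthread_mutex_lock(&lock);
    while (count == 0)                  /* all buffers empty: wait */
        pthread_cond_wait(&not_empty, &lock);
    item = buf[out];
    out = (out + 1) % NSLOTS;
    count--;
    pthread_cond_signal(&not_full);
    pthread_mutex_unlock(&lock);
    return item;
}
```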
- Buffering in Fast Networks (see Fig. 5)
Figure 5:
Networking may involve many copies.
- Copying reduces performance, especially if copy costs are similar to or greater than computation or transfer costs
- High-speed networking stacks put significant effort into achieving zero-copy (see the sendfile sketch below)
- Buffering also increases latency
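As one concrete example of avoiding copies, Linux's sendfile(2) can move file data to a socket entirely inside the kernel, skipping the usual read()+write() bounce through a user-space buffer. This is a Linux-specific sketch with abbreviated error handling; the function name send_file_zero_copy is illustrative.

```c
/* Sketch: send 'len' bytes of the open regular file 'file_fd' to the
 * connected socket 'sock_fd' without copying the data through user space. */
#include <sys/sendfile.h>
#include <sys/types.h>

ssize_t send_file_zero_copy(int sock_fd, int file_fd, size_t len)
{
    off_t offset = 0;
    ssize_t sent, total = 0;
    while ((size_t)total < len) {
        sent = sendfile(sock_fd, file_fd, &offset, len - (size_t)total);
        if (sent <= 0)
            return -1;                  /* error or unexpected end of file */
        total += sent;
    }
    return total;
}
```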