Recent years have seen the advent of new networking technologies such as Fast Ethernet, Gigabit Ethernet, and Asynchronous Transfer Mode, which have significantly increased the speed and bandwidth of computer networks. As networks continue to evolve and new technologies arise, new challenges are posed to operating system designers. It is well known in the industry today that although networks can deliver data at upwards of one gigabit per second, desktop and workstation operating systems cannot deliver that data to the application layer with the same speed and efficiency. The operating system has become the bottleneck in high-performance data delivery. This problem has been researched in depth in past papers, and the focus of this research is not to repeat that work.
This work instead explores the STREAMS subsystem and analyzes its capabilities for delivering high-performance data through the operating system. STREAMS is a facility for developing communication services in the operating system. It defines a processing model and a standard interface for character input/output within the kernel, and between the kernel and the rest of the system. Network protocol stacks can be easily built and modified using the STREAMS architecture by writing modules that share a standard interface and plugging them dynamically into the kernel's data path. The flexibility STREAMS provides in building network stacks is one of its main advantages, and it is for this reason that STREAMS is the choice of many operating system vendors for implementing the Internet protocols (TCP, UDP, and IP) in their releases.
However, a typical design rule in any implementation is that flexibility and modularity do not come for free; there is a cost associated with them. One of the goals of this research was to determine whether this cost outweighs STREAMS' advantages. Access to very high-speed networks makes it possible to push the STREAMS subsystem to its maximum potential and to draw conclusions about what kind of performance a STREAMS implementation can deliver over very high-speed links. A further, related goal was to isolate the subsystem from the network itself and determine the maximum amount of data that can be pushed through it. This yields an approximate upper bound on the performance of the STREAMS architecture.
While STREAMS is implemented in a number of current operating systems, it is not the only choice for implementing the network subsystem in an OS. The main alternative is the BSD approach, developed originally at the University of California, Berkeley. A BSD-style stack differs from a STREAMS stack mainly in that it is a layered approach rather than a modular design. There have been claims that the BSD approach is much faster than a STREAMS-based stack. While the first section of this work discusses STREAMS performance over high-performance networks, one could argue that this information would be of little value if a far superior approach to operating system network code exists; in essence, why bother with STREAMS at all if BSD is so much faster? The second part of this work explores this issue in order to get a clear picture of how the two stacks perform relative to each other.
The future of computer networking is on the verge of many new and exciting innovations. One of the most significant is the design and implementation of IPv6, also termed IP Next Generation. IPv6 is a new network protocol with many new features, one of which is the ability to define specific flows of data that belong to a service level. This service level, better known as Quality of Service (QoS), essentially gives priority to specific flows of data. QoS has many useful applications, mainly in the areas of audio and video traffic. However, once data reaches the network hardware of an end system and enters the operating system, it is treated first-in, first-out as it travels up to the application layer. The STREAMS architecture, by contrast, has a built-in priority mechanism for passing data up and down the stack. The idea for the third part of this research was to explore a merging of the two concepts (QoS and operating system priority) to provide a complete application-to-application priority-based system. Thus, we examine how well STREAMS' capabilities match the direction in which networks are evolving.