Enhanced Transmission Selection

Enhanced Transmission Selection (ETS) addresses the sharing of the communication medium among different types of traffic. Each type of traffic has its own requirements: storage traffic tolerates no packet loss, while real-time applications demand low delay. Sharing the Ethernet bandwidth effectively among these kinds of traffic is the task of ETS.

Some Basic Concepts

Before discussing transmission selection, some basic concepts of packet (frame) forwarding should be briefly explained. The forwarding of packets through intermediate devices is an important process in network architectures, and it is implemented by specific hardware structures. In the Ethernet standard, Ethernet switches are in charge of the forwarding process. Frames arriving through an input interface are stored in buffers. The switch CPU takes the frame at the head of the buffer (the first that arrived), determines the output interface to which it should be sent, and transmits it toward its destination.
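The basic forwarding loop can be sketched as follows. This is a minimal illustration, not a real switch implementation; the MAC addresses, port names, and the FORWARDING_TABLE dictionary are hypothetical, and the "flood" fallback for unknown destinations is a simplifying assumption.

```python
from collections import deque

# Hypothetical forwarding table: destination MAC -> output port.
FORWARDING_TABLE = {"aa:bb:cc:dd:ee:01": "port1",
                    "aa:bb:cc:dd:ee:02": "port2"}

ingress_buffer = deque()  # frames are appended as they arrive

def receive(frame):
    """Store an arriving frame at the tail of the ingress buffer."""
    ingress_buffer.append(frame)

def forward_one():
    """Serve the oldest frame (FIFO order) and look up its output port."""
    frame = ingress_buffer.popleft()  # first frame that arrived
    # Unknown destinations are flooded in this simplified model.
    port = FORWARDING_TABLE.get(frame["dst"], "flood")
    return frame, port

receive({"dst": "aa:bb:cc:dd:ee:01", "payload": b"hello"})
frame, port = forward_one()
print(port)  # port1
```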

When QoS is required, a software structure must be used to classify frames and organize their transmission. This structure is a set of priority queues, which defines the frame transmission order using the class of service as the priority criterion. Each class of service has its own queue: when a frame arrives, it is placed in the queue of its class, and the queues are then served according to the importance of the traffic they hold.

As electronic devices, buffers have a finite length, and when a buffer is full, every new frame that arrives is lost. This is known as the tail drop effect, and the flow control mechanism already described helps to prevent it.
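The tail drop effect can be illustrated with a bounded queue: once the buffer reaches its capacity, new arrivals are simply discarded. The buffer size and frame values below are arbitrary choices for illustration.

```python
from collections import deque

BUFFER_SIZE = 4   # finite buffer capacity (illustrative value)
buffer = deque()
dropped = 0

def enqueue(frame):
    """Accept a frame if there is room; otherwise tail-drop it."""
    global dropped
    if len(buffer) >= BUFFER_SIZE:
        dropped += 1   # tail drop: the newest arrival is lost
        return False
    buffer.append(frame)
    return True

# Six arrivals into a buffer of four: the last two are dropped.
for i in range(6):
    enqueue(i)
print(len(buffer), dropped)  # 4 2
```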

An important queueing discipline worth remarking on is Round Robin scheduling. When multiple queues must be served by a single server, this discipline defines a circular service policy: each queue is served in turn, and a queue is served again only after all the other queues have been served, ensuring a fair distribution of service. However, when priority must be introduced into the serving process, Weighted Round Robin scheduling modifies this policy accordingly. Here, the priority is translated into the number of elements served from each queue: the higher the priority, the more elements are served when a queue is selected. These and other ideas are used in the forwarding process of switches.
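Weighted Round Robin can be sketched in a few lines: each class gets a weight, and one round serves up to that many elements from each queue in a fixed circular order. The class names and weights below are assumed example values.

```python
from collections import deque

# One queue per class of service; contents are placeholder frame labels.
queues = {"high": deque(["h1", "h2", "h3", "h4"]),
          "low":  deque(["l1", "l2", "l3", "l4"])}
# Weight = number of frames served per turn (assumed 3:1 ratio).
weights = {"high": 3, "low": 1}

def wrr_round():
    """Serve each queue up to its weight, in a fixed circular order."""
    served = []
    for cls, q in queues.items():
        for _ in range(weights[cls]):
            if q:
                served.append(q.popleft())
    return served

order = wrr_round()
print(order)  # ['h1', 'h2', 'h3', 'l1']
```

With these weights, the high-priority class receives three service slots for every one given to the low-priority class, which is exactly the proportional bandwidth division described above.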

The Usual Transmission Selection

Transmission selection was initially used in earlier layer-three technologies and consists in determining which packet must be forwarded next by a hop (router). The choice of packet is guided by its type of traffic, each with its own particularities. Back at layer two, Ethernet switches must perform transmission selection to forward the frames arriving at their multiple input interfaces. To handle this task, queueing theory provides scheduling algorithms. When nothing is specified, the forwarding process follows the Round Robin discipline. However, when frames of different traffic types arrive through the multiple interfaces, the selection of the next frame to be forwarded should take priority into account, not only the buffer that contains the frame.

Generally speaking, the usual approach is to assign a specific queue to each type of service, and then serve these queues according to defined weights. The queues can be seen as software abstractions of the organization of frames in the buffers. The different scheduling disciplines differ in how bandwidth is distributed among the priority classes. For example, priority queuing gives all the bandwidth to the class of service with the highest priority until its queue is empty; in Weighted Round Robin scheduling, on the other hand, the bandwidth is divided among the classes of service in proportion to their priority values.
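For contrast with Weighted Round Robin, a strict priority queuing discipline can be sketched as follows: the highest-priority non-empty queue is always served first, so lower classes only transmit when the higher ones are empty. The queue contents and the voice/data labels are illustrative assumptions.

```python
from collections import deque

# Queues ordered from highest to lowest priority.
queues = [deque(["v1", "v2"]),        # e.g. voice traffic (highest)
          deque(["d1", "d2", "d3"])]  # e.g. bulk data (lowest)

def strict_priority_next():
    """Always serve the first non-empty queue in priority order."""
    for q in queues:
        if q:
            return q.popleft()
    return None  # nothing to transmit

order = [strict_priority_next() for _ in range(5)]
print(order)  # ['v1', 'v2', 'd1', 'd2', 'd3']
```

Note how the data queue is starved until the voice queue empties, which is the behavior described above for priority queuing.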

Although the usual transmission selection works, its algorithms define static priorities in the implementation, which can be inconvenient in DCB. In data centers, the importance of each data type may change across different situations, and the bandwidth usage of those types changes with it. In this scenario, the transmission selection should also be capable of dynamic bandwidth allocation when it is needed. This particularity of data centers introduces the need for a more dynamic form of transmission selection.

The Enhancement

The modification proposed to transmission selection focuses on providing dynamic bandwidth allocation to the multiple classes of traffic. In this modification, classes of traffic are defined, and for a switch to execute ETS it must be capable of supporting at least three of them. For each class of traffic, a dedicated queue and a queue scheduling process are defined, based on the class's characteristics.

At the minimum requirement, ETS recognizes three types of service, with a theoretical maximum of 256. These three types usually correspond to traffic that demands: a strict priority queuing policy, such as VoIP application traffic; a maximum latency value, such as video and audio streaming; and dynamic bandwidth allocation to obtain a low-loss path, such as best-effort applications and FCoE traffic.

In this three-type mechanism, the queues are served in the same order as presented above. The strict priority queue uses all the available bandwidth until it is empty (this depends on the implementation). Next, the latency-sensitive traffic queue is served according to its needs, and after that, the remaining bandwidth is dynamically allocated to the elements of the last queue. With this serving policy, each type of traffic receives the bandwidth usage that fits its needs. As a final remark on ETS, the enhancement can be seen as the combination of several scheduling disciplines to provide QoS.

Taken from [4]. The figure shows the bandwidth allocation for each type of traffic varying over time. As described, the variation of the bandwidth usage aims to fit the peculiarities of each type of traffic.
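The three-class serving policy described above can be sketched as one scheduling cycle that combines the disciplines: drain the strict-priority queue first, give the latency-sensitive class its bounded share, then hand the leftover bandwidth to the best-effort class. The link capacity, the latency share, and the frame labels are all illustrative assumptions, not values from the standard.

```python
from collections import deque

LINK_CAPACITY = 10  # frames transmittable per cycle (illustrative)
LATENCY_SHARE = 3   # max frames the latency class may take per cycle

priority_q   = deque(["voip1", "voip2"])             # strict priority
latency_q    = deque(["vid1", "vid2", "vid3"])       # latency-bounded
besteffort_q = deque(f"be{i}" for i in range(10))    # gets the leftovers

def ets_cycle():
    """One cycle: strict priority, then latency class, then best effort."""
    sent, budget = [], LINK_CAPACITY
    while priority_q and budget:       # 1. drain the strict-priority queue
        sent.append(priority_q.popleft())
        budget -= 1
    for _ in range(min(LATENCY_SHARE, budget)):  # 2. bounded latency share
        if latency_q:
            sent.append(latency_q.popleft())
            budget -= 1
    while besteffort_q and budget:     # 3. leftover bandwidth to best effort
        sent.append(besteffort_q.popleft())
        budget -= 1
    return sent

cycle = ets_cycle()
print(cycle[:2], len(cycle))  # ['voip1', 'voip2'] 10
```

When the priority queue shrinks in later cycles, the leftover budget grows and the best-effort class automatically absorbs it, which is the dynamic reallocation behavior ETS aims for.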