In computer networking, Quality of Service (QoS) refers to a broad set of technologies and mechanisms designed to guarantee predictable levels of network performance. The term describes traffic prioritization and resource reservation control mechanisms rather than the achieved service quality itself.

QoS can give different priority to different applications, users, or data flows, or guarantee a certain level of performance to a data flow.

Several related aspects of the network service help in quantitatively measuring the quality of service: dedicated bandwidth, controlled jitter, low latency, and low packet loss characterize good network performance.

QoS deployment is most prominent in critical, lag-sensitive applications such as video-on-demand and voice over IP (VoIP) systems.

To understand the value of QoS in computer networks, we will explore the different QoS mechanisms and implementation techniques. We will also briefly discuss the QoS parameters and how QoS can benefit our overall networking experience.

What is QoS in Computer Networks?

QoS is an advanced networking capability that allows traffic for specific applications or types of traffic to be “prioritized” over others.

In simple words, it means that the network traffic for a voice or video call is given priority over a movie download or massive file transfer as it passes through the network.

Imagine a counter where people can purchase movie tickets, but the counter has a special reservation that allows some people to stand in a separate row.

Thus, they can get tickets faster than others. That means these people have more priority than other people in different rows.

Analogously, replace the people with data packets. QoS gives some data packets the privilege of being sent onto the network before others.

QoS meets the traffic requirements of sensitive applications, such as real-time voice and video, and prevents the degradation of overall quality due to delay, jitter, etc.

What are the parameters to measure QoS?

QoS is described by parameters; defining a QoS parameter specifies how to measure or determine its value. The parameters below help in the quantitative measurement of QoS:

Bandwidth

Bandwidth refers to the speed of the link, in bits per second (bps).

In other words, it is the capacity of a network communications link to transfer a maximum amount of data from one node to another in a specified amount of time.
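As a rough illustration (assuming an idealized link with no protocol overhead), the time to move a given amount of data follows directly from the bandwidth:

```python
# Rough transfer-time estimate over an idealized link (no protocol overhead).
def transfer_time_seconds(data_bytes: int, bandwidth_bps: int) -> float:
    """Time to move data_bytes over a link of bandwidth_bps bits per second."""
    return (data_bytes * 8) / bandwidth_bps

# A 100 MB file over a 100 Mbps link: 800,000,000 bits / 100,000,000 bps.
print(transfer_time_seconds(100_000_000, 100_000_000))  # 8.0
```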

Latency (Delay)

Latency, also known as delay, is the time it takes a packet to travel from one designated node to another.

Latency should be as low as possible. A voice over IP call with a high amount of delay can experience echo and overlapping audio.

Jitter

Jitter refers to the variation of one-way delay in a stream of packets.

For instance, let’s say an IP phone sends a steady stream of voice packets. Because of congestion in the network, some packets are delayed.

Assume the delay between packets 1 and 2 is 30 ms, the delay between packets 2 and 3 is 60 ms, the delay between packets 3 and 4 is 10 ms, and so on. This variation degrades the quality of voice communication.

Jitter occurs due to network congestion, timing drift, and route changes.
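The delays in the example above can be turned into a single jitter figure. As a simplified sketch (RFC 3550 defines a smoothed estimator for RTP; here we just average the absolute differences between successive delays):

```python
# Illustrative jitter measure: mean absolute variation between successive
# inter-packet delays. (RFC 3550 uses a smoothed estimator; this is a
# deliberate simplification for illustration.)
def mean_jitter_ms(delays_ms: list[float]) -> float:
    diffs = [abs(b - a) for a, b in zip(delays_ms, delays_ms[1:])]
    return sum(diffs) / len(diffs)

# Delays from the example: 30 ms, 60 ms, 10 ms between successive packets.
# |60-30| = 30, |10-60| = 50, so the mean variation is 40 ms.
print(mean_jitter_ms([30, 60, 10]))  # 40.0
```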

Packet Loss

Packet Loss refers to the amount of lost data, typically shown as a percentage of lost packets transmitted. If you send 100 packets and only 95 reach the destination, that means there is a 5% packet loss.

There is always a possibility of packet loss. For instance, when there is congestion, packets are queued, and once the queue is full, packets start being dropped.

With QoS, we at least have the ability to decide which packets to drop.
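The percentage in the example above is computed straightforwardly:

```python
def packet_loss_percent(sent: int, received: int) -> float:
    """Fraction of transmitted packets that never arrived, as a percentage."""
    return (sent - received) / sent * 100

# 100 packets sent, 95 received -> 5% loss, as in the example.
print(packet_loss_percent(100, 95))  # 5.0
```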

Why is QoS needed?

If your network has sufficient bandwidth and no traffic that bursts above what it can handle, you will not have problems with packet loss, jitter, or delay.

But in many enterprise networks, sometimes links become overly congested to the point where routers and switches start dropping packets.

This happens because packets arrive faster than the devices can process them. Thus, your streaming applications will suffer. That’s where the need for QoS arises.

QoS is especially important for critical and lag-sensitive applications such as video-on-demand and voice over IP (VoIP) systems, where high-performance, high-quality streaming is involved.

For instance, Voice over IP (VoIP) requires lag-free transmission, whereas email can tolerate lags.

If you have to delay email packets for a few seconds to let VoIP packets flow freely, it will cost you nothing, and it also results in better VoIP QoS.

With the emergence of new applications having stricter network performance requirements than VoIP’s, QoS capabilities are becoming more critical than ever.

Control over resources

One of the significant advantages of deploying QoS is that you can have control over different resources that are in use.

For instance, you can limit the bandwidth consumed over a backbone link by FTP transfers or give priority to important database access.

Serving Critical applications and services

Most organizations have a production and testing server for an app.

When a new application version is rolled out, it is assigned to testers to verify its functionality before it is made available in the market.

However, if the applications that are already in the market are using your server, then you cannot compromise on the server’s speed.

Thus, in this case, keeping the server running becomes critical, and with QoS you can prioritize the production server’s data and traffic.

How can you implement QoS? – QoS Service models

The goal of building a QoS-enabled network architecture is to bring end hosts closer together by increasing performance and reducing delay in the underlying network.

For this, the network should implement service models so that services are specific to the traffic they serve.

Three models exist to implement QoS. Let’s discuss them one by one.

Best Effort

Best Effort refers to a QoS model where all the packets get the same priority, and there is no guaranteed delivery of packets.

It is applied when networks have not configured QoS policies or when the infrastructure does not support QoS.

The network components try their level best to send the packets to their destination without any bounds on delay, latency, jitter, etc. But they give up if they cannot, and without informing either the sender or the recipient.

Present-day IP networks deliver this service by default.

Integrated Services

Integrated Services or IntServ is a QoS model that reserves bandwidth along a specific path on the network.

Applications request resource reservations from the network, and network devices monitor the flow of packets to ensure the reserved resources are available.

Implementing IntServ demands IntServ-capable routers and utilizes the Resource Reservation Protocol (RSVP) for network resource reservation.

IntServ’s drawbacks include limited scalability and high consumption of network resources.

Differentiated Services

Differentiated Services or DiffServ refers to a QoS model where network components, like routers and switches, are configured to service several classes of traffic with different priorities.

Network traffic is divided into classes based on an organization’s requirements. For instance, voice traffic can get a higher priority than other types of traffic.

Packets are assigned priorities using Differentiated Services Code Point (DSCP) for classification.

DiffServ also uses per-hop behaviors to apply QoS techniques, such as queuing and prioritization, to packets.
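As an illustration, on most platforms an application can request a DSCP marking on its own traffic by setting the IP TOS byte on its socket; whether the network honors the mark depends entirely on router configuration:

```python
import socket

# DSCP Expedited Forwarding (EF, decimal 46) occupies the upper six bits of
# the IP TOS byte, so the byte value is 46 << 2 = 0xB8.
DSCP_EF = 46
tos = DSCP_EF << 2

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
# Datagrams sent from this socket now carry DSCP 46; routers configured for
# DiffServ can map that code point to a low-latency per-hop behavior.
```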

How does QoS work? – QoS mechanisms

Quality of Service (QoS) is a suite of technologies used to manage bandwidth usage as data crosses computer networks.

QoS technologies, or tools, each have specific roles used in conjunction with one another to build end-to-end network QoS policies.

QoS mechanisms fall under specific categories based on their functions in managing the network. Let’s see below.

Classification and Marking

To provide preferential service to a particular type of traffic, it is essential to identify it first. After that, the packet may or may not be marked. Together, these two tasks make up classification.

Classification identifies and marks traffic to ensure network devices know how to identify and prioritize data as it traverses a network.

It marks each packet as a member of a network class, which allows devices on the network to determine the packet’s class.

In cases where packet identification happens but no marking, classification is said to be on a per-hop basis: it pertains only to the device it is on and is not passed to the next router.

Standard methods of identifying flows include access control lists (ACLs), policy-based routing, committed access rate (CAR), and network-based application recognition (NBAR).
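The ACL-style approach can be sketched as a list of match rules mapping a packet’s header fields to a traffic class. The rule fields and class names below are illustrative, not any vendor’s syntax:

```python
# Minimal ACL-style classifier: the first matching rule assigns the class.
# Field names and class labels are illustrative.
RULES = [
    ({"proto": "udp", "dst_port": 5060}, "voice-signaling"),
    ({"proto": "udp", "dst_port": 16384}, "voice-media"),
    ({"proto": "tcp", "dst_port": 21}, "bulk"),
]

def classify(packet: dict, default: str = "best-effort") -> str:
    for match, traffic_class in RULES:
        if all(packet.get(k) == v for k, v in match.items()):
            return traffic_class
    return default  # no rule matched: fall back to best effort

print(classify({"proto": "udp", "dst_port": 5060}))  # voice-signaling
print(classify({"proto": "tcp", "dst_port": 443}))   # best-effort
```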

Congestion Management

Because some data traffic is bursty, the amount of traffic sometimes exceeds the speed of a link.

Now, what can the router do? Can it buffer traffic in a single queue and let the first packet in be the first packet out? Or, can it put packets into different queues and service specific queues more often?

Congestion-management tools address these questions.

Congestion management tools use packet classification and marking to determine the queue in which to place each packet.

These tools include priority queuing (PQ), custom queuing (CQ), weighted fair queuing (WFQ), and class-based weighted fair queuing (CBWFQ).
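The second option above, servicing some queues more often than others, can be sketched with a weighted round-robin scheduler. The weights and class names are illustrative, and real schedulers also account for packet sizes:

```python
from collections import deque

# Weighted round-robin: each cycle, a queue may send up to its weight's
# worth of packets, so higher-weight classes are serviced more often.
def weighted_round_robin(queues: dict[str, deque], weights: dict[str, int]) -> list:
    sent = []
    while any(queues.values()):
        for name, q in queues.items():
            for _ in range(weights[name]):
                if q:
                    sent.append(q.popleft())
    return sent

queues = {"voice": deque(["v1", "v2", "v3"]), "bulk": deque(["b1", "b2", "b3"])}
order = weighted_round_robin(queues, {"voice": 2, "bulk": 1})
print(order)  # ['v1', 'v2', 'b1', 'v3', 'b2', 'b3']
```

With a 2:1 weighting, voice packets leave the router twice as often as bulk packets until the voice queue drains.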

Congestion Avoidance

Congestion avoidance mechanisms monitor network traffic for congestion and drop low-priority packets when congestion happens.

Because queues have a limited size, they can fill and overflow. Additional packets cannot enter a full queue and are dropped, a behavior known as tail drop.

The problem with tail drop is that the router cannot prevent an additional packet from being dropped, even if it is a high-priority packet. That’s why a mechanism is necessary that performs two things.

  1. Try to ensure that the queue does not fill up so that there is space for high-priority packets
  2. Allow some criteria for dropping packets that are of lower priority before dropping higher-priority packets

Weighted random early detection (WRED) is the congestion avoidance tool that provides both of these mechanisms.
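The idea behind (W)RED can be sketched as a drop probability that ramps up linearly between a minimum and a maximum queue threshold; the weighting gives lower-priority classes a lower minimum threshold so they are dropped first. The thresholds below are illustrative:

```python
def red_drop_probability(queue_depth: int, min_th: int, max_th: int,
                         max_p: float = 0.1) -> float:
    """Linear RED drop curve: 0 below min_th, up to max_p at max_th, 1 above."""
    if queue_depth < min_th:
        return 0.0
    if queue_depth >= max_th:
        return 1.0
    return max_p * (queue_depth - min_th) / (max_th - min_th)

# WRED weighting: lower-priority traffic gets a lower min threshold, so at
# the same queue depth it already faces a drop probability while
# high-priority traffic is still untouched.
print(red_drop_probability(25, min_th=10, max_th=40))  # low priority: 0.05
print(red_drop_probability(25, min_th=30, max_th=40))  # high priority: 0.0
```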

Shaping and Policing

Policing and shaping are common QoS technologies that limit the bandwidth utilized by administratively defined traffic types.

Shaping refers to a software-set limit on the bandwidth transfer rate for a class of data.

Generic Traffic Shaping, Buffers, and Frame Relay Traffic Shaping are some of the Traffic Shaping tools.

These tools manipulate traffic entering the network and prioritize real-time applications over less time-sensitive applications such as email and messaging.

Like shaping, traffic policing tools enforce a specified bandwidth limit, but they focus on dropping or re-marking excess traffic.

If applications try to use more than their allocated bandwidth, their traffic is re-marked or dropped. Unlike shaping, no buffering happens in policing.
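Both shaping and policing are commonly built on a token bucket; the difference lies in what happens to nonconforming packets (a shaper buffers them, a policer drops or re-marks them). A minimal policer sketch, with illustrative parameters:

```python
class TokenBucketPolicer:
    """Drop packets exceeding rate_bps, with a burst allowance of burst_bytes."""
    def __init__(self, rate_bps: float, burst_bytes: float):
        self.rate = rate_bps / 8          # refill rate in bytes per second
        self.capacity = burst_bytes
        self.tokens = burst_bytes         # bucket starts full
        self.last = 0.0

    def allow(self, size_bytes: int, now: float) -> bool:
        # Refill tokens for the elapsed time, capped at the bucket capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= size_bytes:
            self.tokens -= size_bytes     # packet conforms: consume tokens
            return True
        return False                      # nonconforming: a policer drops it

policer = TokenBucketPolicer(rate_bps=8000, burst_bytes=1500)  # 1 kB/s, 1500 B burst
print(policer.allow(1500, now=0.0))  # True: within the burst allowance
print(policer.allow(1500, now=0.5))  # False: only ~500 B of tokens refilled
```

A shaper would instead hold the second packet in a queue until enough tokens accumulate, which smooths bursts at the cost of added delay.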

Link Efficiency

Link efficiency tools increase bandwidth use and reduce delay for packets accessing the network.

Real-Time Transport Protocol (RTP) header compression, link compression, and Transmission Control Protocol (TCP) header compression are some of the link efficiency tools. These tools limit large flows to show a preference for small flows.

Are there any Issues with QoS?

Some core networking technologies, such as Ethernet, were not designed to support prioritized traffic or guaranteed performance levels, making it much more challenging to implement QoS solutions across the Internet.

Users can maintain full control over QoS on their home network. However, they are dependent on their Internet service provider for QoS choices made at the global level.

Users may reasonably have concerns about providers exercising the high degree of control over their traffic that QoS offers.

Automatic QoS can cause undesirable side effects by over-prioritizing traffic at a higher tier, excessively impacting the performance of traffic that genuinely needs priority.

Thus, QoS can be technically tricky for untrained administrators to implement and tune.

Conclusion

Delivering sufficient Quality of Service (QoS) across IP networks is becoming an increasingly important aspect of today’s enterprise IT infrastructure. Not only is QoS essential for voice and video streaming over the network, but it is also an important factor in supporting the emerging Internet of Things (IoT).

Our connectivity needs continue to expand into all aspects of our personal and business lives. QoS fulfills these needs by ensuring that specific data streams are given priority over others to operate efficiently.

However, QoS can be a complicated and tedious set of technologies to deploy and validate.

Implementing QoS involves a wide range of functions from monitoring and analysis to configuration and testing. Before using QoS, sound planning of the available tools is necessary for performing the critical tasks involved.
