Quality of Service and the Internet

Content from the guide to life, the universe and everything


The Internet's business is transferring information around the world and, by and large, it does this pretty well. However, most of the Internet is provided on a best effort basis - that is, the routers between your computer and the computer you wish to converse with will do their best to get your packets where you want them to go, but make absolutely no promises. Trying to get these promises is the business of Quality of Service or QoS. Note that Quality of Service is a piece of jargon that has a very specific meaning, which is nothing to do with the smile level of the lovely people manning the help desk. It's all about making guarantees on what will happen, and being able to prove that those guarantees will hold.

The Problems

Back when the Internet was created, nobody saw either the need for QoS or any practical way of providing it, so the whole thing runs on a 'best effort' system. Eight 'type of service' bits were provided in each packet's header, but they went largely unused. Many nasty things can happen to packets as they wing their way from place to place, resulting in the following problems, as seen from the point of view of the sender and receiver:

  • Drop rate - the routers might drop packets if they arrive when their buffers are already full. All of the packets might be dropped, or none of them, depending on the state of the network, and there's no way to tell which in advance. The receiving application must ask for this information to be re-transmitted, and this will often cause a severe hiccup in transmission.

  • Delay - it might take a long time for your packet to reach its destination, because it gets held up in long queues, or takes a more indirect route to avoid congestion. Alternatively, it might arrive very quickly; there is no way to predict which in advance.

  • Order - the Internet often delivers packets in a different order from the one in which they were sent, because they travel by different routes. This requires special protocols to avoid having your website displayed upside down, or with the introduction right at the bottom, or similar errors.

  • Error - sometimes packets are misdirected, mixed up together, or corrupted while en route. The receiver has to detect this and, just as if the packet had been dropped, ask the sender to repeat itself.
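To illustrate how a receiver copes with the problems above, here is a minimal sketch - the function name and counting scheme are invented for this example - of using sequence numbers to spot lost, reordered and duplicated packets:

```python
def analyse(received, expected_count):
    """Classify a stream of sequence numbers as seen by a receiver."""
    seen = set()
    duplicates = 0
    out_of_order = 0
    highest = -1
    for seq in received:
        if seq in seen:
            duplicates += 1          # the same packet arrived twice
            continue
        seen.add(seq)
        if seq < highest:
            out_of_order += 1        # it arrived after a later-numbered packet
        highest = max(highest, seq)
    lost = expected_count - len(seen)  # packets that never turned up at all
    return {"lost": lost, "out_of_order": out_of_order, "duplicates": duplicates}

# Packets 0-4 were sent; 3 never arrives, 2 overtakes 1, and 0 is duplicated.
print(analyse([0, 2, 1, 0, 4], expected_count=5))
# {'lost': 1, 'out_of_order': 1, 'duplicates': 1}
```

This is, very roughly, what TCP's sequence numbers and acknowledgements do for you behind the scenes.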

Types of Application

These problems are minor when transferring a fairly large file, because the process will take a long time anyway and there is no particular urgency. Similarly, if a website arrives a little late it still displays the same information, which is presumably still useful. Such applications are called elastic, because they can take advantage of however much or little bandwidth is available.

Other applications are inelastic, that is, they require a certain level of bandwidth to function. If they get more than that they can't use it, and if they get less, then they can't function at all. Some examples of inelastic applications are:

  • TV and radio - you need some kind of upper bound on delay to keep buffering practical. This is normally referred to as soft QoS, because if a frame of data is missed it won't be the end of the world, though it will cause a momentary glitch. One peculiarity of video is that information which arrives a millisecond late is of no value whatsoever, whereas for websites, late information is better than no information at all.

  • Conferencing and telephony1 - this is another form of soft QoS, but it differs from the above in that there is a multi-way flow of information, rather than one-way; the information needs to be broadcast to all participants in the virtual conference.

  • Remote surgery - this is a form of hard QoS, where failure to provide the minimum service required will likely lead to death or losses worth millions of pounds. In particular, we need low latency2, low jitter, and very low error rates.


The Solutions

There are essentially two ways to provide QoS guarantees. The first is simply to provide lots of resources: enough to meet the expected peak demand. This is nice and simple, but it tends to be expensive in practice, and it can't cope if the peak demand increases faster than predicted, since deploying the extra resources takes time.

The second, and the only one that makes sense for the Internet, is to require people to make reservations, and to accept a reservation only if the routers are able to serve it reliably. Naturally, you can then charge people money for making reservations! There are two popular variations on this: IntServ and DiffServ.

Integrated Services (IntServ)

IntServ is a fine-grained reservation system, as opposed to DiffServ's coarse-grained system. The idea is that every router in the system implements IntServ, and every application that requires some kind of guarantee has to make an individual reservation. 'Flow Specs' describe what the reservation is for, while 'RSVP' - the Resource reSerVation Protocol, not the thing you write on party invitations - is the underlying mechanism for making them.

Flow Specifications

There are two parts to a flow spec:

  • What does the traffic look like? This goes in the Traffic SPECification, or TSPEC, part.

  • What guarantees does it need? This goes in the service Request SPECification, or RSPEC, part.

To understand TSPECs, you have to understand a token bucket filter. The idea is that there is a big bucket which slowly fills up with tokens, arriving at some constant rate. Every packet which is sent requires a token, and if there are no tokens, then it cannot be sent. Thus, the rate at which tokens arrive dictates the average rate of traffic flow, while the depth of the bucket dictates how 'bursty' the traffic is allowed to be.
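The token bucket idea can be sketched in a few lines of code. This is an illustrative simplification - the class and parameter names are invented here - rather than any particular router's implementation:

```python
class TokenBucket:
    """Tokens trickle in at `rate` per second, up to `depth`; each packet
    spends one token or is refused."""

    def __init__(self, rate, depth):
        self.rate = rate        # tokens added per second (average flow rate)
        self.depth = depth      # bucket capacity (maximum burst size)
        self.tokens = depth     # start with a full bucket
        self.last = 0.0

    def allow(self, now):
        """Return True if a packet may be sent at time `now` (in seconds)."""
        # Top up the bucket for the time elapsed, without overfilling it.
        self.tokens = min(self.depth, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=10, depth=3)   # 10 packets/s average, bursts of 3
print([bucket.allow(0.0) for _ in range(4)])   # [True, True, True, False]
print(bucket.allow(0.1))                       # True
```

Three packets at time zero drain the burst allowance of three; the fourth is refused. A tenth of a second later one new token has trickled in, so one more packet may go.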

TSPECs typically just specify the token rate and the bucket depth. For example, a video with a refresh rate of 75 frames per second, with each frame taking ten packets, might specify a token rate of 750Hz, and a bucket depth of only ten. The bucket depth would be sufficient to accommodate the 'burst' associated with sending an entire frame all at once. On the other hand, a conversation would need a lower token rate, but a much higher bucket depth. This is because there are often pauses in conversations, so they can make do with fewer tokens by not sending the gaps between words and sentences. However, this means the bucket depth needs increasing to compensate for the traffic being 'burstier'.
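The arithmetic behind the video example above is simply the following (variable names are invented for illustration):

```python
# The article's video example: 75 frames per second, ten packets per frame,
# with each frame sent as a burst all at once.
frames_per_second = 75
packets_per_frame = 10

token_rate = frames_per_second * packets_per_frame   # tokens per second
bucket_depth = packets_per_frame                     # room for one whole frame

print(token_rate, bucket_depth)   # 750 10
```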

RSPECs specify what requirements there are for the flow. It can be normal Internet 'best effort', in which case no reservation is needed; this setting is likely to be used for Web pages, FTP, and similar applications. The 'Controlled Load' setting mirrors the performance of a lightly loaded network: there may be occasional glitches when two people access the same resource by chance, but generally both delay and drop rate are fairly constant at the desired rate; this setting is likely to be used by soft QoS applications. The 'Guaranteed' setting gives an absolutely bounded service, where the delay is promised never to go above a desired amount and packets are never dropped, provided the traffic stays within spec.


RSVP

RSVP is described in RFC 2205 (RFC stands for Request For Comments - for historical reasons, many Internet protocols are defined in RFCs). All machines on the network capable of sending QoS data send a PATH message every 30 seconds, which spreads out through the network. Those who want to listen to them send a corresponding RESV (short for 'Reserve') message, which then traces the path backwards to the sender. The RESV message contains the flow specs.

The routers between the sender and listener have to decide whether they can support the reservation being requested; if they cannot, they send a reject message to let the listener know. Otherwise, once they accept the reservation, they have to carry the traffic.

The routers then store the nature of the flow. This is all done in 'soft state', so if nothing is heard for a certain length of time, the router will let the reservation time out and cancel it. This solves the problem of the sender or receiver crashing, or being shut down incorrectly without first cancelling the reservation. The individual routers may also, at their option, police the traffic to check that it conforms to the flow specs.
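The soft-state idea might be sketched as follows - the class and message names here are illustrative, not taken from RFC 2205. Each reservation is refreshed periodically, and the router forgets any reservation that has not been refreshed within the timeout:

```python
class SoftStateTable:
    """A router's table of reservations, kept alive only by refreshes."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_refresh = {}           # flow name -> time of last refresh

    def refresh(self, flow, now):
        self.last_refresh[flow] = now    # a RESV-style refresh arrived

    def expire(self, now):
        """Cancel reservations whose refresh messages have stopped arriving."""
        dead = [f for f, t in self.last_refresh.items() if now - t > self.timeout]
        for f in dead:
            del self.last_refresh[f]
        return dead

table = SoftStateTable(timeout=90)       # eg three missed 30-second refreshes
table.refresh("video-call", now=0)
table.refresh("file-sync", now=0)
table.refresh("video-call", now=60)      # this sender is still alive
print(table.expire(now=120))             # ['file-sync'] - its sender went quiet
```

A crashed sender simply stops refreshing, and its reservation evaporates on its own - no explicit cancellation message is ever needed.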


The problem with IntServ is that a lot of state information needs to be stored in each router. IntServ therefore works on a small scale, but as you scale up to a system the size of the Internet, it becomes difficult to keep track of all of the reservations. As a result, IntServ is not very popular in practice.

Differentiated Services (DiffServ)

DiffServ takes the opposite approach: it deals not with single flows and single reservations, but with bulk flows. That is, a single negotiation will be made for all of the flows from, for example, a single ISP, or a single university. The contracts resulting from these negotiations are called 'Service Level Agreements', and will inevitably involve money changing hands. These service level agreements will specify what classes of traffic will be provided, what guarantees are needed for each class, and how much of each class will be paid for. They can also vary dynamically, so that the user pays only for what they use.

When traffic enters a 'DiffServ Cloud' - ie a collection of DiffServ routers - it is first classified by the sender: for example, in a university, lecturers may get the highest priority, followed by graduates, followed by undergraduates. These classes are used to set the 'type of service' field in the IP header3. Within the DiffServ Cloud, all the individual routers need to do is give highest priority to the packets with the highest value in the type of service field. This is considerably easier than what they would have to do in an IntServ system. Of course, all the traffic which enters the system will be policed, and if there is so much of it that it breaches the service level agreement, then the sender may be liable for fines according to the details of the contract.

An example of the sorts of services that might be provided is the following: traffic is split into Premium, Assured, Gold, Silver, and Bronze classes. In each router, if there is any Premium traffic, then that is served first, followed by any Assured traffic, followed by Gold, Silver, and Bronze. If the router runs out of buffer space, then packets will need to be discarded and this is done in a weighted manner, so that Bronze is the most likely to go, and Gold is the least likely. Assured packets will only be discarded if there are no packets at lower levels in the queue, and similarly for Premium packets.
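The serve-and-discard behaviour described above might be sketched like this. The class names follow the article, but the rest is illustrative: in particular, the discard here always takes from the lowest-priority non-empty queue, a deterministic simplification of the weighted discard described above:

```python
import collections

CLASSES = ["Premium", "Assured", "Gold", "Silver", "Bronze"]  # high to low

class DiffServRouter:
    def __init__(self, capacity):
        self.capacity = capacity                 # total buffer space, in packets
        self.queues = {c: collections.deque() for c in CLASSES}

    def _size(self):
        return sum(len(q) for q in self.queues.values())

    def enqueue(self, packet, klass):
        if self._size() >= self.capacity:
            # Buffer full: discard from the lowest-priority non-empty queue,
            # so Bronze is the first to go and Premium the last.
            for c in reversed(CLASSES):
                if self.queues[c]:
                    self.queues[c].popleft()
                    break
        self.queues[klass].append(packet)

    def dequeue(self):
        # Strict priority: serve Premium first, then Assured, and so on down.
        for c in CLASSES:
            if self.queues[c]:
                return c, self.queues[c].popleft()
        return None

router = DiffServRouter(capacity=3)
router.enqueue("p1", "Bronze")
router.enqueue("p2", "Premium")
router.enqueue("p3", "Gold")
router.enqueue("p4", "Assured")          # buffer full: the Bronze packet is dropped
print([router.dequeue() for _ in range(3)])
# [('Premium', 'p2'), ('Assured', 'p4'), ('Gold', 'p3')]
```

Notice that the router never needs to know about individual flows or reservations: the class field in each packet tells it everything it needs.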

One advantage of DiffServ is that all the policing and classifying is done at the boundaries between DiffServ clouds. This means that in the core of the Internet, routers can get on with the job of routing, and need not care about the complexities of collecting payment or enforcing agreements. The downside is that the way individual routers deal with the type of service field - that is, the 'Per Hop Behaviour' - is somewhat arbitrary, so it is difficult to predict the end-to-end behaviour. This is further complicated when a packet crosses two or more DiffServ clouds.

1 In this context, telephony is the transmission of sound between two computers using the Internet.
2 Latency is the time between information being sent and it being received; latency is therefore much higher by snail mail than by email, for example.
3 IP stands for Internet Protocol - the protocol used for most of the Internet. The header is a bit which goes at the top of each data packet to say where it needs to go, where it came from, and so on.
