Computer Networks – Multimedia
January 11, 2010
These are my notes… RAR! If there are any issues give me a buzz, because some stuff might not be clear enough.
Most multimedia is transmitted via TCP or UDP. It may also be transmitted via RTP and RTCP, but they’re just protocols built on UDP – in fact, sometimes they’re classified as being part of the application layer. Anyway, let’s find out what all of this means…
Transmission Control Protocol (TCP)
Sets up a connection before use and tears it down afterward. Connections are useful for maintaining state. There is a delay in setting up the connection.
Operates on a byte stream, which is fragmented into packets (not samples). The sound to be transmitted is broken up and sent; the segments may arrive in a different order and have to be re-arranged before delivery.
If there’s an error (such as a packet being lost) a retransmission will take place – this builds up further delay.
In TCP the sequence numbers are limited to 32 bits, so the sequence number may wrap around – not often an issue, but it can cause problems.
TCP assumes that there must be congestion within the network if packets are being lost or arriving late. It can slow down transfer speed using the window size (which may even be set to zero).
User Datagram Protocol (UDP)
Has no concept of a connection, therefore it’s up to the application to keep track of everything.
A packet stream instead of a byte stream. The application can now decide how much information should be put into a packet and when the packets are sent.
Fire and forget.
Packets can overtake each other, so the delays will vary. If a packet arrives too late:
Play silence – works quite well if this is very rare.
Replay the previous packet.
Try and predict what’s coming next.
If a packet is missing but the next one is here, just play the next one.
Or use another protocol…
For media you need quality of service, but the Internet only offers “best effort” delivery.
The internet gives no promises…
Sometimes the messages don’t get delivered properly.
Causes of packet delay:
Encoding, sampling, packetising.
Queues and scheduling at the router.
Decoding, de-packetising etc.
Multimedia is delay sensitive, so best effort may not be good enough for it.
Jitter – the variability of packet delay across the link.
Delay = Difference between the time sent and the time received.
Jitter = The difference between the delay of the current packet and that of the previous one.
You can average the delay and the jitter over a period of time.
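As a toy illustration of those definitions (timestamps in milliseconds, and assuming sender and receiver clocks are synchronised – a big assumption in practice):

```python
def delays(sent, received):
    """Per-packet delay: receive time minus send time."""
    return [r - s for s, r in zip(sent, received)]

def jitters(delay):
    """Jitter: difference between consecutive packet delays."""
    return [abs(delay[i] - delay[i - 1]) for i in range(1, len(delay))]

sent = [0.0, 20.0, 40.0, 60.0]   # send times (ms), one packet every 20 ms
recv = [5.0, 26.0, 44.0, 69.0]   # arrival times (ms)

d = delays(sent, recv)           # [5.0, 6.0, 4.0, 9.0]
j = jitters(d)                   # [1.0, 2.0, 5.0]
avg_delay = sum(d) / len(d)      # 6.0 ms
avg_jitter = sum(j) / len(j)     # ~2.67 ms
```

(RTP’s RTCP reports actually use a smoothed estimate of inter-arrival jitter rather than a plain average, but the idea is the same.)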
Loss tolerant: infrequent losses cause minor glitches.
The opposite of ordinary data, which is loss intolerant but delay tolerant.
Real-Time Transport Protocol (RTP)
RTP specifies the packet structure for packets carrying audio or video data.
Each RTP packet provides a payload type identification (e.g. MP3, Mov), a packet sequence number (a 16-bit number that counts packets, so it advances much more slowly than TCP’s count of data bytes) and a time stamp (when the delays vary, the receiver can use it to play the samples out in the correct order and at the correct times).
RTP runs in the end systems (not in the routers in between) and is an application-layer protocol, but it is transport oriented. An RTP packet is normally carried inside a UDP packet, but it doesn’t have to be.
RTP does not provide any mechanisms to ensure timely delivery or quality of service. No control over the network in-between.
Payload type (7 bits) – indicates the type of encoding being used.
Sequence number (16 bits) – identifies the ordered position of the packet so as the receiver can put packets in order if they fall out of order.
Time stamp (32 bits long) – the sampling instant of the first byte in this data packet.
SSRC (32 bits long) – identifies the source of the RTP stream. For example, a session carrying sound and video would use two different SSRC numbers, one per stream.
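Putting the fields above together, the fixed RTP header can be packed and parsed like this – a minimal sketch of the 12-byte header only, ignoring padding, extensions, CSRCs and the marker bit:

```python
import struct

def build_rtp_header(payload_type, seq, timestamp, ssrc, version=2):
    """Pack a minimal 12-byte RTP header (V=2, no padding/extension/CSRCs)."""
    byte0 = version << 6             # version in the top two bits
    byte1 = payload_type & 0x7F      # 7-bit payload type, marker bit clear
    return struct.pack("!BBHII", byte0, byte1, seq & 0xFFFF,
                       timestamp & 0xFFFFFFFF, ssrc & 0xFFFFFFFF)

def parse_rtp_header(data):
    """Unpack the fixed header fields from the first 12 bytes."""
    b0, b1, seq, ts, ssrc = struct.unpack("!BBHII", data[:12])
    return {"version": b0 >> 6, "payload_type": b1 & 0x7F,
            "seq": seq, "timestamp": ts, "ssrc": ssrc}

# Payload type 14 is MPEG audio (e.g. MP3) in the standard RTP A/V profile.
hdr = build_rtp_header(payload_type=14, seq=42, timestamp=160, ssrc=0xDEADBEEF)
parsed = parse_rtp_header(hdr)       # round-trips the same field values
```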
Real-Time Control Protocol (RTCP)
Each participant in an RTP session will periodically send RTCP information to the other participants.
Each RTCP packet will send information about the jitter, the loss rate etc. etc. The sender can then adapt to what’s going on.
The sender sends RTP and RTCP to the internet, which goes to a number of receivers. The receivers then send control packets back to the sender.
Receiver report packets include the fraction of packets lost, the last sequence number, the average inter-arrival jitter.
The sender report packets include the SSRC of the RTP stream (this is the ID it’s using), the current time, the number of packets sent and the number of bytes sent.
Source description packets include the email address of the sender, the sender’s name, the SSRC of the associated RTP stream – the aim of which is to provide a mapping between the SSRC and the user/host name.
We can now synchronise streams.
RTCP attempts to limit its traffic to 5% of the session bandwidth. Of that RTCP bandwidth, 25% goes to the senders and 75% to the receivers.
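The bandwidth split is simple arithmetic; a sketch (the function name and parameters are mine):

```python
def rtcp_bandwidth_split(session_bw_bps, rtcp_fraction=0.05, sender_share=0.25):
    """Return (sender RTCP budget, receiver RTCP budget) in bit/s:
    5% of the session bandwidth, split 25% / 75%."""
    rtcp_bw = session_bw_bps * rtcp_fraction
    return rtcp_bw * sender_share, rtcp_bw * (1 - sender_share)

s, r = rtcp_bandwidth_split(1_000_000)   # a 1 Mbit/s session
# s = 12500.0 bit/s shared by senders, r = 37500.0 bit/s shared by receivers
```

Each participant then spaces its RTCP reports out so that, collectively, the group stays inside its budget.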
Recovery From Lost Packets
This can be achieved using a variety of techniques.
Lost packets in the media world are either packets that really were lost, or were received too late.
If we’re expecting a packet and it doesn’t arrive, we can send a NACK (negative acknowledgement). This can be much more efficient than acknowledging every packet.
If the packet just arrives late, then multiple copies may be received. If a packet arrives with a sequence number that has already been seen, throw it away.
Retransmission delays can be very large.
Detect errors with CRC checksums (optionally used in UDP).
Forward Error Correction
Simple Scheme – For every group of n chunks of data, send out n + 1 chunks. The additional chunk is the XOR of the original n chunks. If one chunk is lost, the receiver can recover it by XORing together all the chunks that did arrive (including the parity chunk): in each bit position the values that arrived cancel out, leaving exactly the missing bit. As long as the sender and the receiver compute the XOR the same way, the missing chunk can be logically reconstructed. This only works when a single chunk of the group is lost.
However, this adds to the play-out delay, because the receiver may have to wait for the whole group of n + 1 chunks before it can recover a loss. And again, it only works if one chunk is missing.
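The XOR parity scheme above can be sketched in a few lines (chunk contents are made up, and all chunks are assumed to be the same length):

```python
def xor_chunks(chunks):
    """Byte-wise XOR of a list of equal-length chunks."""
    out = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, byte in enumerate(chunk):
            out[i] ^= byte
    return bytes(out)

def encode(data_chunks):
    """Append one parity chunk: the XOR of all n data chunks."""
    return list(data_chunks) + [xor_chunks(data_chunks)]

def recover(received):
    """`received` holds the n + 1 chunks with exactly one replaced by None;
    XORing everything that did arrive reproduces the missing chunk."""
    present = [c for c in received if c is not None]
    missing = xor_chunks(present)
    i = received.index(None)
    return received[:i] + [missing] + received[i + 1:]

data = [b"abcd", b"efgh", b"ijkl"]
group = encode(data)          # 3 data chunks + 1 parity chunk
group[1] = None               # one chunk is lost in transit
restored = recover(group)     # restored[:3] equals the original data
```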
Another idea is to send two versions of the same media: a compressed lower-quality version and a higher-quality one, so a lost high-quality chunk can be replaced by its low-quality copy.
Interleaving – Divide the data across n packets so that consecutive samples go into different packets. If a packet is lost, only every nth sample is missing, spread evenly across the stream, which is much less noticeable than losing a consecutive run.
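A sketch of interleaving, assuming the receiver knows the packet count and the total number of samples:

```python
def interleave(samples, n):
    """Spread consecutive samples round-robin over n packets."""
    return [samples[i::n] for i in range(n)]

def deinterleave(packets, total):
    """Rebuild a stream of `total` samples; None marks a lost packet,
    whose samples show up as gaps spread evenly through the stream."""
    n = len(packets)
    out = [None] * total
    for i, pkt in enumerate(packets):
        if pkt is None:
            continue                      # lost packet: leave its slots empty
        for j, sample in enumerate(pkt):
            out[i + j * n] = sample
    return out

packets = interleave(list(range(8)), 4)   # [[0, 4], [1, 5], [2, 6], [3, 7]]
packets[1] = None                         # lose one packet in transit
stream = deinterleave(packets, 8)         # [0, None, 2, 3, 4, None, 6, 7]
```

Note how the two missing samples are isolated rather than adjacent, so concealment (e.g. replaying the previous sample) works well.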
Providing Quality of Service
Trying to provide the best service over the resources you have. Packets are divided into different classes and the classes are isolated from each other; resources (fixed, non-sharable bandwidth) are then allocated to the classes. At a router, packets can arrive in any order – they go into a queue. If the queue becomes full then packets have to be dropped, and packets in particular classes can be marked with a higher priority for being dropped.
Prioritising – assigning priorities to the different classes. Packets from classes with a higher priority will be forwarded first. This may starve the lower-priority classes.
Round robin – going round all the classes and forwarding a packet from each. However, if there’s congestion then delay-sensitive classes such as voice may still arrive too late.
Weighted Fair Queueing – the different classes of incoming data are divided into separate queues, and each queue is served in proportion to its weight, so each class gets a fixed proportion of the bandwidth.
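The round-robin idea can be sketched as draining one packet per class per cycle (class names and packet labels are made up; weighted fair queueing would instead take a number of packets proportional to each class’s weight):

```python
from collections import deque

def round_robin(queues):
    """Drain per-class queues, forwarding one packet from each
    non-empty class on every pass."""
    sent = []
    while any(queues.values()):
        for q in queues.values():
            if q:                      # skip classes with nothing queued
                sent.append(q.popleft())
    return sent

queues = {"voice": deque(["v1", "v2"]),
          "video": deque(["a1"]),
          "data":  deque(["d1", "d2", "d3"])}
order = round_robin(queues)   # ['v1', 'a1', 'd1', 'v2', 'd2', 'd3']
```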
Traffic arrives in bursts. The aim of a policing mechanism is to limit the traffic to three set parameters:
1. Long Term Average Rate – the number of packets that can be sent per time unit.
2. Peak Rate – the maximum rate at which packets can be sent over a short interval (e.g. packets per minute). This must be at least the long term average rate above.
3. Maximum Burst Size – the maximum number of consecutively sent packets.
A token bucket is used to throttle and limit the burst size and average rate.
Tokens are added to the bucket periodically. In order for a packet to pass through the router it must obtain a token from the bucket.
Tokens are added at a fixed rate, so if little data arrives the bucket fills up with tokens (up to its capacity), and if a lot of data arrives the bucket is emptied of tokens. If there are no tokens then the data has to wait for tokens to become available; if there are lots of tokens then a burst of arriving data is just forwarded straight through.
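A minimal sketch of a token bucket policer – one token per packet, with illustrative rate and capacity values:

```python
class TokenBucket:
    """Token bucket: tokens accumulate at `rate` per second, up to
    `capacity`; each forwarded packet consumes one token."""

    def __init__(self, rate, capacity):
        self.rate = rate            # long-term average rate (tokens/s)
        self.capacity = capacity    # maximum burst size (tokens)
        self.tokens = capacity      # start with a full bucket
        self.last = 0.0             # time of the previous check

    def allow(self, now):
        """Refill for elapsed time, then try to spend one token."""
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True             # packet may pass
        return False                # packet must wait (or be dropped)

tb = TokenBucket(rate=2.0, capacity=3)       # avg 2 pkt/s, bursts up to 3
burst = [tb.allow(0.0) for _ in range(4)]    # [True, True, True, False]
later = tb.allow(1.0)                        # True: two tokens refilled
```

The capacity caps the burst size, while the refill rate enforces the long-term average – exactly the two limits described above.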