Computer Networks – Switching, Delays and Performance
December 30, 2009
A network is a mesh of connected routers. There are two basic ways of transferring data through a network:
- Circuit switching:
The data follows a fixed path (a channel) with resources dedicated to it along the way.
- Packet switching:
The data is transmitted in chunks (packets) whenever resources are available.
- In circuit switching, end-to-end resources are reserved for the transfer
- Resources are dedicated, which means that:
- There is no sharing
- The full bandwidth of the link is available to the transfer
- This allows the network to act as a circuit, as if the two ends were joined by a piece of wire
- You get guaranteed performance from this.
- But setup is required to establish and configure the path
- Basically it’s a computer version of 2 cups connected by string
- The resources on the network are divided into pieces, normally of a fixed size.
- The number of pieces is fixed when the network is created
- Pieces are allocated to specific end-to-end transfers
- If a piece is not being used by the transfer that owns it, it sits idle
- The bandwidth of a link is divided into pieces using:
- frequency division multiplexing (FDM), and
- time division multiplexing (TDM)
- With FDM, each user gets a continuous low rate; for example, each of five users gets 20% of the bandwidth
- With TDM, each user gets all the bandwidth for a short burst of time, then waits for their next turn
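The FDM/TDM split can be sketched numerically. A minimal sketch, assuming a hypothetical 1 Mbps link shared by 5 users (to match the 20% example):

```python
# Sketch: how FDM and TDM split a 1 Mbps link among 5 users.
# (Hypothetical numbers, chosen to match the 20% example above.)

LINK_BPS = 1_000_000
USERS = 5

# FDM: each user holds a fixed frequency band the whole time.
fdm_rate_per_user = LINK_BPS / USERS        # 200,000 bps, continuously

# TDM: each user gets the full link for 1 slot in every USERS slots,
# so the average rate is the same, but the data arrives in bursts.
tdm_burst_rate = LINK_BPS                   # full link during your slot
tdm_avg_rate = LINK_BPS * (1 / USERS)       # 200,000 bps averaged

print(fdm_rate_per_user, tdm_burst_rate, tdm_avg_rate)
```

Either way each user ends up with 20% of the link on average; the difference is whether it arrives as a steady trickle (FDM) or in full-rate bursts (TDM).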
Each end-to-end data stream is divided into packets. The packets share the network's resources and use them as needed, so each packet uses the full link bandwidth while it is being transmitted.
Store-and-forward means that each packet is completely received before being forwarded to the next node, so packets move one ‘hop’ at a time
- Packets are interleaved within the network
- Packets can arrive faster than the outgoing link can send them, so a buffer (queue) is used to store them
- If the buffer becomes full, packets are dropped.
- The aim is to get a low probability of packet dropping
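The buffering and dropping behaviour can be sketched with a tiny drop-tail queue simulation (all numbers here are made up for illustration):

```python
# Sketch of a drop-tail buffer at a router: packets that arrive while
# the buffer is full are dropped. Arrival pattern is hypothetical.

from collections import deque

def run_buffer(arrivals_per_tick, serve_per_tick, capacity, ticks):
    """Count how many packets a fixed-size buffer drops."""
    buf, dropped = deque(), 0
    for _ in range(ticks):
        for _ in range(arrivals_per_tick):
            if len(buf) < capacity:
                buf.append("pkt")
            else:
                dropped += 1              # buffer full: packet is lost
        for _ in range(min(serve_per_tick, len(buf))):
            buf.popleft()                 # forward one stored packet

    return dropped

# 3 arrivals but only 2 departures per tick: the buffer fills, then drops.
print(run_buffer(arrivals_per_tick=3, serve_per_tick=2, capacity=5, ticks=10))
```

With arrivals below the service rate nothing is ever dropped; sustained overload is what fills the buffer and causes loss.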
Packet switching vs circuit switching
Now, I like packet switching, and I also like circuit switching. But which one is best?
Packet switching allows more users to use the network, so it is usually better, but it depends:
- Packet switching is good for data that comes in bursts because resources are shared
- But when congestion is high, packets get lost.
- So for applications that need guaranteed throughput and low latency, circuit switching is better.
- The internet uses packet switching
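The "more users" claim can be made concrete with the standard textbook comparison (the numbers here are assumed, not from these notes): a 1 Mbps link, users who need 100 kbps when active, and each user active 10% of the time.

```python
# Sketch of why packet switching supports more users. Assumed numbers:
# 1 Mbps link, 100 kbps per active user, each user active 10% of the time.

from math import comb

LINK, PER_USER, P_ACTIVE = 1_000_000, 100_000, 0.1
circuit_users = LINK // PER_USER  # circuit switching: hard cap of 10 users

# Packet switching: admit 35 users and ask how often MORE than 10 are
# active at once (only then does demand exceed the link).
n = 35
p_overload = sum(comb(n, k) * P_ACTIVE**k * (1 - P_ACTIVE)**(n - k)
                 for k in range(circuit_users + 1, n + 1))

print(circuit_users, f"{p_overload:.4f}")  # overload probability < 0.5%
```

So packet switching can admit 3.5x as many users while the link is overloaded far less than 1% of the time, which is why it wins for bursty traffic.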
Delay, loss and throughput
- Data doesn’t arrive instantly and some never arrives
- There are limits on how much can be sent per second
- If we say that a link can transmit at R bps
- And a packet is L bits big
- The transmission delay = L/R (packet size divided by link rate)
- The propagation delay (the time for a bit to travel the link) is the distance in metres divided by the propagation speed in the medium, in m/s
- Total delay comes from processing in the individual nodes and from queueing while waiting for access to a shared link
- We use RTT (Round Trip Time) to measure it, this is the time for a packet to travel to the destination and back
- If there is no queueing delay, you can predict which delay dominates: for a small packet (say 1 byte), the propagation delay dominates
- For a large transfer (say 25 MB over a 10 Mbps link), the transmission delay dominates
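The two cases above can be checked with a quick calculation. A sketch, assuming a 1000 km link and a typical propagation speed of 2×10^8 m/s:

```python
# Sketch: transmission delay (L/R) vs propagation delay (distance/speed).
# Link length and propagation speed are assumed values.

def transmission_delay(bits, rate_bps):
    return bits / rate_bps       # time to push the whole packet onto the link

def propagation_delay(metres, speed_mps=2e8):
    return metres / speed_mps    # time for one bit to travel the link

RATE = 10_000_000                # 10 Mbps link
DIST = 1_000_000                 # 1000 km

# Small packet (1 byte = 8 bits): propagation dominates.
print(transmission_delay(8, RATE), propagation_delay(DIST))

# Large transfer (25 MB = 200,000,000 bits): transmission dominates.
print(transmission_delay(25 * 8_000_000, RATE), propagation_delay(DIST))
```

For the 1-byte packet the transmission delay is under a microsecond against 5 ms of propagation delay; for the 25 MB transfer the transmission delay is 20 seconds and propagation is negligible.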
If you run the traceroute command, your computer sends 3 packets to each hop on the path to the destination and reports the RTT for each.
Throughput is the rate at which data is transferred.
For a large transfer, throughput is limited by the slowest of three things: the remote computer’s link, your (local) link, and the network in between. The slowest one is the bottleneck.
1 kbps is 1 ms per bit
1 Mbps is 1 microsecond per bit
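Both points can be sketched in a few lines, using made-up example link rates:

```python
# Sketch: end-to-end throughput is set by the slowest (bottleneck) link,
# and "time per bit" is just 1/rate. Example rates are assumed.

def throughput(*link_rates_bps):
    return min(link_rates_bps)   # the bottleneck link wins

def seconds_per_bit(rate_bps):
    return 1 / rate_bps

# Local link 10 Mbps, network core 100 Mbps, remote server's link 2 Mbps:
print(throughput(10e6, 100e6, 2e6))       # the 2 Mbps link is the bottleneck

print(seconds_per_bit(1_000))             # 1 kbps  -> 0.001 s (1 ms) per bit
print(seconds_per_bit(1_000_000))         # 1 Mbps  -> 1e-6 s (1 us) per bit
```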