Today we’re going to talk about Layer 4 of the OSI model, also known as the Transport Layer. The transport layer is responsible for establishing either “connectionless” or “connection-oriented” conversations between two nodes. It is also responsible for flow control, congestion avoidance, and error recovery.
A connectionless conversation is analogous to sending a letter via First Class mail. You drop the letter off at the post office trusting that it will be delivered to the destination, but you get no automatic notification about the status of the letter or whether it ever arrived. UDP is the protocol most often used for these conversations. The header of a UDP packet contains the bare minimum of information, to save on bandwidth. UDP is generally the protocol used for transmitting voice and video across a network, because there is no time to re-send lost packets when you’re listening to someone or watching a video in real time. Error correction (if any is used at all) depends on higher layers to detect loss and request retransmission.
It is important to note that while two computers can talk back and forth to each other using UDP, they are still working in a connectionless fashion.
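To make the “drop it off and hope” idea concrete, here’s a minimal sketch using Python’s socket API on localhost. The port and message are made up for the example; notice that `sendto()` returns immediately with no delivery confirmation of any kind.

```python
import socket

# A minimal connectionless exchange on localhost (illustrative sketch).
# The receiver just binds a port; no handshake or session is established.
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(("127.0.0.1", 0))          # let the OS pick a free port
port = recv_sock.getsockname()[1]

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sendto() fires the datagram off like a First Class letter: nothing at
# this layer tells the sender whether it arrived.
send_sock.sendto(b"hello", ("127.0.0.1", port))

data, addr = recv_sock.recvfrom(1024)
print(data)   # b'hello'

send_sock.close()
recv_sock.close()
```

Even though both sides can keep exchanging datagrams like this, no connection state ever exists between them.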
A connection-oriented protocol such as TCP uses many more tools to ensure packet delivery, and is analogous to sending a letter via Certified Mail. With certified mail, you are given a receipt showing when the letter successfully arrived at its destination. The header of a TCP packet contains much more information for services that operate at Layer 4, like flow control, for instance.
The first thing that happens with TCP is a three-way handshake. One computer says to the destination computer (via a SYN packet), “Can I talk to you?” The destination computer replies with a SYN/ACK packet: “Yes, you can talk to me. And can I talk to you?” Finally, the first computer answers the second computer’s question with an ACK packet that says, “Yes, you can talk to me.” The purpose of this is to ensure bi-directional communication is possible between the two nodes. A conversation is closed in a similar back-and-forth fashion: each side sends a FIN packet, and the other side acknowledges it with an ACK packet.
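You normally never build the handshake yourself; the operating system does it for you when you open a TCP socket. In this Python sketch (addresses and the message are made up), the SYN / SYN-ACK / ACK exchange happens inside the `connect()` call, and `close()` kicks off the FIN/ACK teardown.

```python
import socket

# Open a listening TCP socket on localhost (illustrative sketch).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))             # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))       # three-way handshake happens here
conn, addr = server.accept()

# By the time connect() returned, bi-directional communication was confirmed.
client.sendall(b"can you hear me?")
msg = conn.recv(1024)
print(msg)                                # b'can you hear me?'

# close() on each side starts the FIN/ACK teardown.
client.close()
conn.close()
server.close()
```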
Let’s say the first computer (we’ll call it N1) is sending three packets of data to N2. Layer 4 assigns each of these packets a sequence number, correlated to the number of bytes sent in each packet. Say the first packet arrives at the destination carrying bytes 1 through 1024. The receiving end then sends back an ACK packet with an acknowledgment number based on the data it has received so far: in this case, ACK 1025. To the sender, an ACK number of 1025 means “I have received all bytes before 1025, and I expect the next packet to start at byte 1025.” This is called forward (or cumulative) acknowledgment, and the receiving end could send back an ACK packet for every packet it receives. But that would eat up a lot of bandwidth.
So to help keep overhead down, we have something called a delayed acknowledgment. For example, when all three packets arrive at the receiving end, it responds with a single ACK 3073 packet (1024 + 1024 + 1024 + 1). This tells the sending computer, “I’ve gotten all data up to byte 3072; I expect byte 3073 to be next.” The number of packets that can be received before an ACK packet is sent back to the sender can vary due to things like congestion windowing, explained below.
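The acknowledgment arithmetic above can be sketched as a few lines of toy Python. This is not a real TCP stack; it just assumes 1024-byte segments with the first data byte numbered 1, as in the example.

```python
# Toy arithmetic for cumulative ("forward") acknowledgment, assuming
# three in-order segments of 1024 bytes and the first data byte numbered 1.
segment_size = 1024
next_expected = 1                  # the byte the receiver expects next

for _ in range(3):                 # three segments arrive in order
    next_expected += segment_size  # the ACK number advances past each one

# One delayed ACK covers all three segments:
print(next_expected)               # 3073: "I have everything up to 3072"
```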
Now let’s say that for some reason, the network device on the receiving end was so busy that it could only handle the first two of the three incoming packets. Since its buffer is now full and it cannot receive more data for the moment, it sends back an ACK of 2049 (1024 + 1024 + 1) with a window size of zero. This tells the sending end to wait until it gets another ACK 2049 with a window greater than zero. The window is a value set in the TCP header that indicates how much data the receiving end can currently accept.
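Here is the same zero-window scenario as a toy sketch, assuming a receive buffer that can only hold two 1024-byte segments. Again, this is just the arithmetic, not a real TCP implementation.

```python
# Flow-control sketch: a receive buffer holding only two 1024-byte segments.
segment_size = 1024
buffer_capacity = 2 * segment_size
buffered = 0
acked = 1                             # next byte the receiver expects

for _ in range(3):                    # three segments arrive
    if buffered + segment_size <= buffer_capacity:
        buffered += segment_size
        acked += segment_size
    # else: the third segment is dropped and the ACK number stays put

window = buffer_capacity - buffered   # advertised window in the TCP header
print(acked, window)                  # 2049 0 -> "wait until my window reopens"
```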
N2 isn’t the only device that may become overwhelmed by incoming traffic. Routers between the two computers may also be choked by heavy traffic, causing their buffers to fill and packets to be dropped as a result. A process called slow start is used to help “feel out” the network. Basically, the sending side starts with one packet, gets an ACK, then sends twice the data it sent last time, gets the ACK for that, doubles the data again, and so on, until the ACKs coming back indicate that the capacity of the network equipment between the two ends has been maxed out. This is why, when you start downloading a file from a web server, the transfer typically starts off slow for a second or two, climbs higher, and then settles at some sort of average data rate.
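The doubling pattern of slow start is easy to see in a quick sketch. The path capacity of 16 segments per round trip is an assumed number for illustration; real paths vary, and real TCP stacks add more machinery on top of this.

```python
# Slow-start sketch: the congestion window doubles each ACK round until
# it hits an assumed end-to-end path capacity (the 16 is made up).
path_capacity = 16        # max segments the path can carry per round trip
cwnd = 1                  # congestion window, measured in segments
growth = [cwnd]

while cwnd < path_capacity:
    cwnd = min(cwnd * 2, path_capacity)   # each ACK round doubles the window
    growth.append(cwnd)

print(growth)             # [1, 2, 4, 8, 16]
```

That ramp from 1 up to the path’s limit is the slow climb you see at the start of a download.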
The TCP header also carries a checksum field. If the checksum the receiver computes doesn’t match the one in the header, the corrupted segment is discarded, and the receiver’s ACK number simply stops advancing past the last known good byte. The sender never learns that the packet was corrupted. It just concludes the packet never got there in the first place and resends it.
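The checksum TCP (and UDP and IP) use is the Internet checksum: a 16-bit ones’-complement sum, described in RFC 1071. Here’s a small sketch over some arbitrary sample bytes; the key property is that summing the data together with its own checksum comes out to zero, which is exactly the check a receiver performs.

```python
def internet_checksum(data: bytes) -> int:
    """16-bit ones'-complement sum in the style of RFC 1071."""
    if len(data) % 2:
        data += b"\x00"                            # pad to a whole 16-bit word
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]      # add each 16-bit word
        total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
    return (~total) & 0xFFFF

packet = b"\x45\x00\x00\x1c"                       # arbitrary sample bytes
csum = internet_checksum(packet)
print(hex(csum))                                   # 0xbae3

# A receiver recomputes the sum over data + checksum; anything nonzero
# means corruption, and the segment is silently discarded.
assert internet_checksum(packet + csum.to_bytes(2, "big")) == 0
```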
Sequence numbers are also used to put out-of-order data back together for the higher layers.
I’m sure I’m leaving something out, but the important thing is that you understand the difference between connectionless and connection-oriented protocols. Future CCNA posts will concentrate a great deal on the Cisco Internetwork Operating System. Fun fun!
Saturday, April 12th, 2008