MIDTERM EXAM WITH SOLUTIONS
Midterm Exam

CMPSCI 591 and 453: Computer Networks

Spring 2005

Prof. Jim Kurose

ON AND OFF-CAMPUS STUDENTS ARE TAKING THE SAME EXAM

Instructions:




  • Please use two exam blue books – answer questions 1, 2 in one book, and the remaining two questions in the second blue book.

  • Put your name and student number on the exam books NOW!

  • The exam is closed book.

  • You have 85 minutes to complete the exam. Be a smart exam taker: if you get stuck on one problem, go on to another. Also, don't waste your time giving irrelevant (or not requested) details.

  • The total number of points for each question is given in parentheses. There are 100 points total. An approximate amount of time that would be reasonable to spend on each question is also given; if you follow the suggested time guidelines, you should finish with 10 minutes to spare.

  • Show all your work. Partial credit is possible for an answer, but only if you show the intermediate steps in obtaining the answer.

  • Good luck.


Question 1: ``Quickies'' (24 points, 20 minutes)
Answer each of the following questions briefly, i.e., in at most a few sentences.


  1. Suppose all of the network sources send data at a constant bit rate. Would packet-switching or circuit-switching be more desirable in this case? Why? (Answer: circuit-switching is more desirable here, as there are no statistical multiplexing gains to be had, and by using circuits each connection will get a constant amount of bandwidth that matches its CBR rate. On the other hand, circuit-switching has more overhead in terms of signaling, so there is an argument that packet-switching is better here since there is no call-setup overhead. I'll take either answer.) Now suppose that all of the network sources are bursty – they only occasionally have data to send. Would packet-switching or circuit-switching be more desirable in this case? Why? (Answer: packet-switching is better here because there are statistical multiplexing gains – when a source does not have data to send, it is not allocated bandwidth that would otherwise sit idle. Hence this bandwidth is available for use by other sources.)
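The statistical-multiplexing gain for bursty sources can be made concrete with a quick binomial calculation. The numbers below (a 1 Mbps link, 100 kbps per active source, sources active 10% of the time) are illustrative assumptions, not part of the exam:

```python
from math import comb

def p_more_than_k_active(n, p, k):
    """Probability that more than k of n independent bursty sources are
    simultaneously active (binomial tail)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1, n + 1))

# Circuit switching can admit only 10 sources (10 x 100 kbps = 1 Mbps).
# Packet switching can admit 35, because more than 10 of them are
# rarely active at the same time.
p_overload = p_more_than_k_active(n=35, p=0.10, k=10)
print(f"P(more than 10 of 35 sources active) = {p_overload:.6f}")
```

With these assumed figures, the packet-switched link supports 3.5x as many sources while the probability of momentary overload stays well under 1%.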

  2. Describe the use of the “If-Modified-Since” header in the HTTP protocol. (Answer: when a web client or web cache has a copy of a previously requested document, its GET request to the server includes an If-Modified-Since line that gives the time at which the browser/cache received its copy of the document. If the document has not been modified at the web server since this time, the web server need not (and will not) send a duplicate copy of the document.)

  3. What does it mean when we say that control messages are “in-band”? (Answer: it means that control messages and data messages may be interleaved with each other on the same connection. Indeed, a single message may contain both control information and data.) What does it mean when we say that control messages are “out-of-band”? (Answer: it means that control and data messages are carried on separate connections.) Give an example of a protocol that has in-band control messages (Answer: HTTP, DNS, TCP, and SMTP are all examples) and one example of a protocol that has out-of-band control messages (Answer: FTP).

  4. Consider a TCP connection between hosts A and B. Suppose that the TCP segments from A to B have source port number x and destination port number y. What are the source and destination port numbers for the segments traveling from B to A? (Answer: source port is y, destination port is x).

  5. What is the purpose of the connection-oriented welcoming socket, which the server uses to perform an accept()? Once the accept() is done, does the server use the welcoming socket to communicate back to the client? Explain. (Answer: a connection-oriented server waits on the welcoming socket for an incoming connection request. When a connection request arrives, a new socket is created at the server for communication back to that client.)

  6. Suppose a web server has 1000 ongoing TCP connections. How many server-side sockets are used? How many server-side port numbers are used? Briefly (two sentences at most each) explain your answer. (Answer: if there are 1000 ongoing connections, and nothing else happening on the server, there will be 1001 sockets in use – the single welcoming socket and the 1000 sockets in use for server-to-client communication. The ONLY server-side port number in use at the server will be the single port number associated with the welcoming socket, e.g., port 80 on a web server.)
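The socket accounting in the last two answers can be checked directly with the sockets API. The sketch below uses three loopback connections to stand in for the 1000; all port numbers are chosen by the OS:

```python
import socket

# A welcoming socket accepts connection requests; each accept() returns
# a NEW connected socket, yet every one of them shares the listener's port.
welcome = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
welcome.bind(("127.0.0.1", 0))           # let the OS pick an ephemeral port
welcome.listen(5)
server_port = welcome.getsockname()[1]

clients, conns = [], []
for _ in range(3):                        # three "ongoing connections"
    c = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    c.connect(("127.0.0.1", server_port))
    clients.append(c)
    conn, _addr = welcome.accept()
    conns.append(conn)

# 3 connections -> 3 connected sockets + 1 welcoming socket, but the
# server side uses only ONE port number for all of them.
assert all(conn.getsockname()[1] == server_port for conn in conns)
print(f"server-side sockets: {len(conns) + 1}, server-side port: {server_port}")

for s in clients + conns + [welcome]:
    s.close()
```

Answer 4's port reversal is visible here too: each server-side `conn.getpeername()` matches the corresponding client's `getsockname()`.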



Question 2: A reliable data transfer protocol (26 points, 25 minutes)
Consider a scenario in which Host A wants to simultaneously send messages to Hosts B and C. A is connected to B and C via a broadcast channel – a packet sent by A (e.g., in a single udt_send() operation) is carried by the channel to both B and C. Suppose the broadcast channel connecting A, B, and C:

  • can independently lose and corrupt messages from A to B and C (and so, for example, a message sent by A might be correctly received at B but not at C)

  • has a maximum bounded delay of D (i.e., if a message is sent by A, it will either be lost or arrive at B and/or C within D time units).

  • any control messages (e.g., an ACK or NAK) sent by B or C to A will only be received by A but can be lost or corrupted

Design a stop-and-wait-like error-control protocol for reliably transferring a packet from A to B and C, such that A will not accept new data from the upper layer until it knows that both B and C have correctly received the current packet. Give an FSM description for A and C (assuming the FSM for B is similar; if it is not, give the FSM for B as well). Also give a description of the packet format used.
Solution:
This problem is a variation on the simple stop-and-wait protocol (rdt3.0). Because the channel may lose messages and because the sender may resend a message that one of the receivers has already received (either because of a premature timeout or because the other receiver has yet to receive the data correctly), sequence numbers are needed. As in rdt3.0, a 1-bit sequence number will suffice here. Note that the receivers need to identify themselves in their ACKs so that the sender knows which receiver sent each ACK, and can thus make sure that it has received ACKs from both receivers.


The sender and receiver FSMs are shown in the figure below (note: I do not expect you to have come up with a solution at the level of syntactic detail shown below!). In this problem, the sender state indicates whether the sender has received an ACK from B (only), from C (only), or from neither C nor B. The receiver state indicates which sequence number the receiver is waiting for.
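The sender's behavior just described can also be sketched in code. This is an illustrative simulation, not the required FSM, and the per-round ACK loss probability is an assumption of the sketch:

```python
import random

def send_to_both(data, seq, loss=0.3, rng=None):
    """Sketch of sender A: broadcast (seq, data), then retransmit on each
    timeout until ACKs carrying this seq have arrived from BOTH B and C.
    The receiver ID in each ACK is what lets A keep per-receiver state."""
    rng = rng or random.Random(7)        # seeded for reproducibility
    acked = {"B": False, "C": False}
    attempts = 0
    while not all(acked.values()):
        attempts += 1                    # udt_send(pkt) and start timer
        for rcvr in ("B", "C"):
            # An ACK from an already-satisfied receiver changes nothing.
            if not acked[rcvr] and rng.random() >= loss:
                acked[rcvr] = True       # ACK(seq, rcvr) received intact
        # otherwise: timeout (after at most 2*D), loop and retransmit
    return attempts

print("rounds until both ACKed:", send_to_both("pkt", seq=0))
```

Only after both flags are set does A return to the state where it accepts new data from the upper layer, which is exactly the condition the problem requires.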
The packet formats are:





Problem 3: Error control potpourri (24 points, 20 minutes)



  1. What is the purpose/use of the UDP checksum? (Answer: to detect bit errors, i.e., flipped bits, in the UDP segment).
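The checksum itself is the 16-bit one's-complement sum used throughout the Internet protocols. A sketch over raw bytes only (real UDP also sums a pseudo-header of IP addresses, protocol, and length, omitted here):

```python
def internet_checksum(data: bytes) -> int:
    """16-bit one's-complement sum of 16-bit words, then complemented:
    the checksum algorithm UDP uses (pseudo-header omitted in this sketch)."""
    if len(data) % 2:
        data += b"\x00"                            # pad odd-length data
    total = 0
    for i in range(0, len(data), 2):
        total += (data[i] << 8) | data[i + 1]
        total = (total & 0xFFFF) + (total >> 16)   # end-around carry
    return ~total & 0xFFFF

seg = b"\x1c\x46\x45\x1f\x00\x1c\xab\xcd"
csum = internet_checksum(seg)
# Receiver check: recomputing over data plus checksum yields 0 when no
# bits were flipped (and usually a nonzero value when some were).
assert internet_checksum(seg + csum.to_bytes(2, "big")) == 0
```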




  2. Consider the Go-Back-N protocol. Suppose that the size of the sequence number space (number of unique sequence numbers) is N, and the window size is N. Show (give a timeline trace showing the sender, receiver, and the messages they exchange over time) that the Go-Back-N protocol will not work correctly in this case. (Answer: suppose that the sequence number space is {0,1} and N=2, i.e., two messages can be transmitted but not yet acknowledged. The timeline below shows an error that can occur)
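The failure scenario can be traced deterministically. The sketch below assumes sequence space {0, 1}, window N = 2, and both ACKs lost:

```python
def gbn_failure_trace():
    """Seq space {0,1}, window N = 2: the sender sends pkt0 and pkt1, the
    receiver delivers both (and now expects seq 0 again), both ACKs are
    lost, and the sender's timeout forces a retransmission of pkt0,
    which the receiver cannot tell apart from NEW data with seq 0."""
    delivered, expected = [], 0
    for seq in (0, 1):                 # original transmissions; ACKs lost
        if seq == expected:
            delivered.append(("pkt", seq))
            expected = (expected + 1) % 2
    retransmitted_seq = 0              # sender times out, goes back to 0
    if retransmitted_seq == expected:  # receiver expects 0: accepts it!
        delivered.append(("old pkt delivered as new", retransmitted_seq))
    return delivered

print(gbn_failure_trace())
```

The duplicate delivery at the end is the protocol failure: with a window as large as the sequence space, an old packet is indistinguishable from a new one.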





  3. Consider the Go-Back-N protocol, and suppose that the base of the sender's window is X. Is it possible for the sender to receive an ACK for a packet that has a smaller sequence number than X? If so, sketch a timeline diagram showing the sender and receiver messages sent and received that shows how this is possible. If not, explain why this can never happen. (Answer: yes, it is possible. For example, the sender transmits the packet with sequence number X-1 and its ACK is delayed in the channel; the sender times out and retransmits, and the receiver ACKs again. The first ACK arrives and advances the sender's base to X, and the duplicate ACK for X-1 then arrives after the base has already moved past it.)



Question 4: Caching and delays (26 points, 20 minutes)
Consider the networks shown in the figure below. There are two user machines m1.a.com and m2.a.com in the network a.com. Suppose the user at m1.a.com types in the URL www.b.com/bigfile.htm into a browser to retrieve a 1Gbit (1000 Mbit) file from www.b.com.
4.1. List the sequence of DNS and HTTP messages sent/received from/by m1.a.com as well as any other messages that leave/enter the a.com network that are not directly sent/received by m1.a.com from the point that the URL is entered into the browser until the file is completely received. Indicate the source and destination of each message. You can assume that every HTTP request by m1.a.com is first directed to the HTTP cache in a.com and that the cache is initially empty, and that all DNS requests are iterated queries.

  • m1.a.com needs to resolve the name www.b.com to an IP address, so it sends a DNS REQUEST message to its local DNS resolver (this takes no time given the assumptions below)

  • The local DNS server does not have any information, so it contacts a root DNS server with a REQUEST message (this takes 500 ms given the assumptions below)

  • The root DNS server returns the name of the Top Level Domain (TLD) DNS server for .com (this takes 500 ms given the assumptions below)

  • The local DNS server contacts the .com TLD server (this takes 500 ms given the assumptions below)

  • The .com TLD server returns the name of the authoritative name server for b.com (this takes 500 ms given the assumptions below)

  • The local DNS server contacts the authoritative name server for b.com (this takes 100 ms given the assumptions below)

  • The authoritative name server for b.com returns the IP address of www.b.com. (this takes 100 ms given the assumptions below)

  • The HTTP client sends an HTTP GET message for www.b.com, which it sends to the HTTP cache in the a.com network (this takes no time given the assumptions).

  • The HTTP cache does not find the requested document in its cache, so it sends the GET request to www.b.com. (this takes 100 ms given the assumptions below)

  • www.b.com receives the GET request. There is a 1 sec transmission delay to send the 1 Gbit file from www.b.com to R2. If we assume that as soon as the first few bits of the file arrive at R2, they are forwarded on the 1 Mbps R2-to-R1 link, then this delay can be ignored.

  • The 1 Gbit file (in smaller packets or in a big chunk, that’s not important here) is transmitted over the 1 Mbps link between R2 and R1. This takes 1000 seconds. There is an additional 100 ms propagation delay.

  • There is a 1 sec delay to send the 1 Gbit file from R1 to the HTTP cache. If we assume that as soon as the first few bits of the file arrive at R1, they are forwarded on to the cache, then this delay can be ignored.

  • There is a 1 sec delay to send the 1 Gbit file from the HTTP cache to m1.a.com. If we assume that as soon as the first few bits of the file arrive at the cache, they are forwarded on to m1.a.com, then this delay can be ignored.

  • The total delay is thus: .5 + .5 + .5 + .5 + .1 + .1 + 1 + 1000 + 1 + 1 = 1005.2 secs (1002.2, which ignores the three 1-second transmission delays, is also an OK answer).
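As a quick arithmetic check, the components listed above sum as follows (the labels paraphrase the steps):

```python
# Delay components from the steps above, in seconds:
delays = [
    0.5, 0.5,      # local DNS <-> root DNS server
    0.5, 0.5,      # local DNS <-> .com TLD server
    0.1, 0.1,      # the two 100 ms message exchanges
    1.0,           # transmit the 1 Gbit file from www.b.com to R2
    1000.0,        # 1 Gbit over the 1 Mbps R2-to-R1 bottleneck link
    1.0, 1.0,      # R1 -> cache and cache -> m1.a.com transmissions
]
total = round(sum(delays), 1)
print(f"total delay = {total} s")
# If the three 1-second transmissions are overlapped with the 1000 s
# bottleneck transfer (cut-through), the total drops to 1002.2 s.
```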


4.2. How much time does it take to accomplish the steps you outlined in your answer to 4.1? Explain how you arrived at this answer. In answering this question, you can make the following assumptions

  • The packets containing any DNS commands and HTTP commands such as GET are very small compared to the size of the file, and thus their transmission times (but not their propagation times) can be neglected.

  • Propagation delays within the LAN are small enough to be ignored. The propagation delay from router R1 to router R2 is 100 ms.

  • The propagation delay from anywhere in a.com to any other site in the Internet (except b.com) is 500 ms.

(See above for answer. Note that we have neglected to account for TCP hand-shaking delays for the HTTP exchanges!)
4.3. Now assume that machine m2.a.com makes a request to exactly the same URL that m1.a.com made. List the sequence of DNS and HTTP messages sent/received from/by m2.a.com as well as any other messages that leave/enter the a.com network that are not directly sent/received by m2.a.com from the point that the URL is entered into the browser until the file is completely received. Indicate the source and destination of each message. [Hint: make sure you consider caching here]

  • m2.a.com needs to resolve the name www.b.com to an IP address so it sends a DNS REQUEST message to its local DNS resolver (this takes no time given the assumptions above)

  • The local DNS server looks in its cache and finds the IP address for www.b.com, since m1.a.com had just requested that the name be resolved, and returns the IP address to m2.a.com. (this takes no time given the assumptions above)

  • The HTTP client at m2.a.com sends an HTTP GET message for www.b.com, which it sends to the HTTP cache in the a.com network (this takes no time given the assumptions).

  • The HTTP cache finds the requested document in its cache, so it sends a GET request with an If-Modified-Since header to www.b.com. (this takes 100 ms given the assumptions)

  • www.b.com receives the GET request. The document has not changed, so www.b.com sends a short HTTP REPLY message to the HTTP cache in a.com indicating that the cached copy is valid. (this takes 100 ms given the assumptions)

  • There is a 1 sec delay to send the 1 Gbit file from the HTTP cache to m2.a.com.

  • The total delay is thus: .1 + .1 + 1 = 1.2 secs
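The revalidation step in 4.3 (the cache's GET carrying an If-Modified-Since header) can be sketched as the raw request bytes. The timestamp below is an arbitrary illustration:

```python
from email.utils import formatdate

def conditional_get(host: str, path: str, cached_at_epoch: float) -> str:
    """Build the conditional GET an HTTP cache sends to revalidate a
    cached copy; the origin server answers 304 Not Modified (headers
    only, no body) if the resource is unchanged since the given time."""
    ims = formatdate(cached_at_epoch, usegmt=True)   # RFC 1123 date format
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            f"If-Modified-Since: {ims}\r\n"
            f"\r\n")

req = conditional_get("www.b.com", "/bigfile.htm", 1_100_000_000)
print(req)
```

Because the 304 reply carries no body, revalidation costs only the 200 ms round trip to www.b.com rather than another 1000-second transfer.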

4.4. How much time does it take to accomplish the steps that you outlined in your answer to 4.3? (Answer: see above)

4.5. Now suppose there is no HTTP cache in network a.com. What is the maximum rate at which machines in a.com can make requests for the file www.b.com/bigfile.htm while keeping the time from when a request is made to when it is satisfied non-infinite in the long run? (Answer: since it takes 1000 secs to send the file from R2 to R1, the maximum rate at which the file can be sent from b.com to a.com is 1 file every 1000 seconds, i.e., an arrival rate of .001 requests/sec.)

