Computer Networks: A Systems Approach
Fifth Edition

Solutions Manual

Larry Peterson and Bruce Davie

2011

Dear Instructor:

This Instructors' Manual contains solutions to most of the exercises in the fifth edition of Peterson and Davie's Computer Networks: A Systems Approach.

Exercises are sorted (roughly) by section, not difficulty. While some exercises are more difficult than others, none are intended to be fiendishly tricky. A few exercises (notably, though not exclusively, the ones that involve calculating simple probabilities) require a modest amount of mathematical background; most do not. There is a sidebar summarizing much of the applicable basic probability theory in Chapter 2.

An occasional exercise is awkwardly or ambiguously worded in the text. This manual sometimes suggests better versions; also see the errata at the web site.

Where appropriate, relevant supplemental files for these solutions (e.g. programs) have been placed on the textbook web site, http://mkp.com/computer-networks. Useful other material can also be found there, such as errata, sample programming assignments, PowerPoint lecture slides, and EPS figures. If you have any questions about these support materials, please contact your Morgan Kaufmann sales representative. If you would like to contribute your own teaching materials to this site, please contact our Associate Editor David Bevans, D.Bevans@elsevier.com.

We welcome bug reports and suggestions as to improvements for both the exercises and the solutions; these may be sent to netbugsPD5e@elsevier.com.

Larry Peterson
Bruce Davie
March, 2011

Solutions for Chapter 1

[…]

(d) We require at least one RTT from sending the request before the first bit of the picture could begin arriving at the ground (TCP would take longer). 25 MB is 200 Mb. Assuming bandwidth delay only, it would then take 200 Mb/1000 Mbps = 0.2 seconds to finish sending, for a total time of 0.2 + 2.57 = 2.77 sec until the last picture bit arrives on earth.

14. The answer is in the book.

15. (a) Delay-sensitive; the messages exchanged are short.
(b) Bandwidth-sensitive, particularly for large files. (Technically this does presume that the underlying protocol uses a large message size or window size; stop-and-wait transmission (as in Section 2.5 of the text) with a small message size would be delay-sensitive.)
(c) Delay-sensitive; directories are typically of modest size.
(d) Delay-sensitive; a file's attributes are typically much smaller than the file itself.

16. (a) On a 100 Mbps network, each bit takes 1/10^8 s = 10 ns to transmit. One packet consists of 12000 bits, and so is delayed due to bandwidth (serialization) by 120 µs along each link. The packet is also delayed 10 µs on each of the two links due to propagation delay, for a total of 260 µs.
(b) With three switches and four links, the delay is 4 × 120 µs + 4 × 10 µs = 520 µs.
(c) With cut-through, the switch delays the packet by 200 bits = 2 µs. There is still one 120 µs delay waiting for the last bit, and 20 µs of propagation delay, so the total is 142 µs. To put it another way, the last bit still arrives 120 µs after the first bit; the first bit now faces two link delays and one switch delay but never has to wait for the last bit along the way.

17. The answer is in the book.
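These delay calculations are easy to check mechanically. Here is a small C++ sketch (ours, not part of the original manual; the constants are Exercise 16's) that reproduces the three cases:

    #include <cstdio>

    int main() {
        const double bit_us   = 0.01;    // one bit at 100 Mbps = 10 ns
        const double pkt_bits = 12000;   // packet size in bits
        const double prop_us  = 10;      // per-link propagation delay

        // (a) one store-and-forward switch, two links
        double a = 2 * pkt_bits * bit_us + 2 * prop_us;
        // (b) three store-and-forward switches, four links
        double b = 4 * pkt_bits * bit_us + 4 * prop_us;
        // (c) cut-through after 200 bits, one switch, two links
        double c = pkt_bits * bit_us     // last bit still lags by 120 us
                 + 200 * bit_us          // one cut-through switch delay
                 + 2 * prop_us;
        printf("(a) %.0f us  (b) %.0f us  (c) %.0f us\n", a, b, c);
        // prints: (a) 260 us  (b) 520 us  (c) 142 us
    }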
18. (a) The effective bandwidth is 100 Mbps; the sender can send data steadily at this rate and the switches simply stream it along the pipeline. We are assuming here that no ACKs are sent, and that the switches can keep up and can buffer at least one packet.
(b) The data packet takes 520 µs, as in 16(b) above, to be delivered; the 400-bit ACKs take 4 µs/link to be sent back, plus propagation, for a total of 4 × 4 µs + 4 × 10 µs = 56 µs; thus the total RTT is 576 µs. 12000 bits in 576 µs is about 20.8 Mbps.
(c) 100 × 4.7×10^9 bytes / 12 hours = 4.7×10^11 bytes/(12 × 3600 s) ≈ 10.9 MBps = 87 Mbps.

19. (a) 100×10^6 bps × 10×10^-6 s = 1000 bits = 125 bytes.
(b) The first-bit delay is 520 µs through the store-and-forward switches, as in 16(b). 100×10^6 bps × 520×10^-6 s = 52000 bits = 6500 bytes.
(c) 1.5×10^6 bps × 50×10^-3 s = 75,000 bits = 9375 bytes.
(d) The path is through a satellite, i.e. between two ground stations, not to a satellite; this ground-to-satellite-to-ground path makes the total one-way travel distance 2×35,900,000 meters. With a propagation speed of c = 3×10^8 meters/sec, the one-way propagation delay is thus 2×35,900,000/c = 0.24 sec. Bandwidth×delay is thus 1.5×10^6 bps × 0.24 sec = 360,000 bits ≈ 45 KBytes.

20. (a) Per-link transmit delay is 10^4 bits / 10^8 bps = 100 µs. Total transmission time, including both propagation delays and the switch latency, is 2×100 + 2×20 + 35 = 275 µs.
(b) When sending as two packets, the time to transmit one packet is cut in half. Here is a table of times for various events:

    T=0    start
    T=50   A finishes sending packet 1, starts packet 2
    T=70   packet 1 finishes arriving at S
    T=100  A finishes sending packet 2
    T=105  packet 1 departs for B
    T=155  packet 2 departs for B
    T=175  bit 1 of packet 2 arrives at B
    T=225  last bit of packet 2 arrives at B

This is smaller than the answer to part (a) because packet 1 starts to make its way through the switch while packet 2 is still being transmitted on the first link, effectively getting a 50 µs head start. Smaller is faster, here.

21. (a) Without compression the total time is 1 MB/bandwidth. When we compress the file, the total time is

    compression time + compressed size/bandwidth.

Equating these and rearranging, we get

    bandwidth = size reduction/compression time
              = 0.5 MB/1 sec = 0.5 MB/sec for the first case,
              = 0.6 MB/2 sec = 0.3 MB/sec for the second case.

(b) Latency doesn't affect the answer, because it would affect the compressed and uncompressed transmission equally.

22. The number of packets needed, N, is ⌈10^6/D⌉, where D is the packet data size. Given that overhead = 50×N and loss = D (we have already counted the lost packet's header in the overhead), we have overhead+loss = 50 × ⌈10^6/D⌉ + D.

    D        overhead+loss
    1000     51000
    10000    15000
    20000    22500

The optimal size is 10,000 bytes, which minimizes the above function.
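The tabulated values of Exercise 22 can be verified with a few lines of C++ (ours, for illustration; the loop checks only the three packet sizes the table considers):

    #include <cstdio>

    int main() {
        const long sizes[] = {1000, 10000, 20000};
        for (long d : sizes) {
            long n = (1000000 + d - 1) / d;   // ceil(10^6 / D) packets
            long cost = 50 * n + d;           // header overhead + lost packet
            printf("D = %5ld   overhead+loss = %ld\n", d, cost);
        }
    }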
23. Comparison of circuits and packets results as follows:
(a) Circuits pay an up-front penalty of 1024 bytes being sent on one round trip, for a total data count of 2048 + n, whereas packets pay an ongoing per-packet cost of 24 bytes, for a total count of 1024×n/1000. So the question really asks how many packet headers it takes to exceed 2048 bytes, which is 86. Thus for files 86,000 bytes or longer, using packets results in more total data sent on the wire.
(b) The total transfer latency for packets is the sum of: the per-packet transmit delay t = 8192/b (packet size over bandwidth b) introduced by each of the s switches, s×t; the total propagation delay for the links, (s+2)×0.002; the per-packet processing delay introduced by each switch, s×0.001; and the transmit delay for all the packets at the source, c×t, where the total packet count c is n/1000. This gives a total latency of (8192s/b) + 0.003s + 0.004 + (8.192n/b) = (0.02924 + 0.000002048n) seconds. The total latency for circuits is the transmit delay for the whole file (8n/b), the total propagation delay for the links, and the setup cost for the circuit, which is just like sending one packet each way on the path. Solving the resulting inequality 0.02924 + 8.192(n/b) > 0.076576 + 8(n/b) for n shows that circuits achieve a lower delay for files larger than or equal to 987,000 B.
(c) Only the payload-to-overhead ratio p affects the number of bits sent, and there the relationship is simple. The following table shows the results of varying the parameters, obtained by solving for the n where circuits become faster, as above. The table does not show how rapidly the performance diverges; for varying p the divergence can be significant.

    s    b         p      pivotal n
    5    4 Mbps    1000     987000
    6    4 Mbps    1000    1133000
    7    4 Mbps    1000    1280000
    8    4 Mbps    1000    1427000
    9    4 Mbps    1000    1574000
    10   4 Mbps    1000    1721000
    5    1 Mbps    1000     471000
    5    2 Mbps    1000     643000
    5    8 Mbps    1000    1674000
    5    16 Mbps   1000    3049000
    5    4 Mbps    512       24000
    5    4 Mbps    768       72000
    5    4 Mbps    1014    2400000

(d) Many responses are reasonable here. The model considers only the network implications, and does not take into account the use of processing or state-storage capacity on the switches. The model also ignores the presence of other traffic and of more complicated topologies.

[…]

(c) Latency = 5; zero jitter here:

    1000   1005
    1001   1006
    1003   1008   (we lost 1002)
    1004   1009
    1005   1010

32. Generally, with MAX_PENDING = 1, one or two connections will be accepted and queued; that is, the data won't be delivered to the server. The others will be ignored; eventually they will time out. When the first client exits, any queued connections are processed.

34. Note that UDP accepts a packet of data from any source at any time; TCP requires an advance connection. Thus, two clients can now talk simultaneously; their messages will be interleaved on the server.

Solutions for Chapter 2

1. (Waveform figure: the NRZ, Manchester, and NRZI encodings of the bits 1 0 0 1 1 1 1 1 0 0 0 1 0 0 0 1; the drawing does not survive this extraction.)

2. See the figure. (Figure: the NRZI waveform for the bits 1 1 1 0 0 0 1 0 1 1 1 1 1 1 0 1 0 1 0 1; likewise not recoverable.)

3. The answer is in the book.

4. One can list all 5-bit sequences and count, but here is another approach: there are 2^3 sequences that start with 00, and 2^3 that end with 00. There are two sequences, 00000 and 00100, that do both. Thus, the number that do either is 8+8−2 = 14, and finally the number that do neither is 32−14 = 18. Thus there would have been enough 5-bit codes meeting the stronger requirement; however, additional codes are needed for control sequences.

5. The stuffed bits (zeros), shown in bold in the original, are marked here with parentheses: 1101 0111 11(0)0 1011 111(0) 1010 1111 1(0)11 0

6. The ∧ marks each position where a stuffed 0 bit was removed. There were no stuffing errors detectable by the receiver; the only such error the receiver could identify would be seven 1's in a row. 1101 0111 11∧10 1111 1∧010 1111 1∧110

7. The answer is in the book.

8. ..., DLE, DLE, DLE, ETX, ETX
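The stuffing rule used in Exercises 5 and 6 (insert a 0 after every run of five 1s) is simple to express in code. A minimal C++ sketch, ours rather than the manual's; the input string is the unstuffed message of Exercise 5 as we reconstruct it from the stuffed answer:

    #include <iostream>
    #include <string>

    std::string stuff(const std::string& bits) {
        std::string out;
        int run = 0;                    // length of the current run of 1s
        for (char b : bits) {
            out += b;
            run = (b == '1') ? run + 1 : 0;
            if (run == 5) {             // five 1s in a row: stuff a 0
                out += '0';
                run = 0;
            }
        }
        return out;
    }

    int main() {
        // prints the stuffed string of Exercise 5, spaces removed
        std::cout << stuff("110101111101011111101011111110") << "\n";
    }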
9. (a) X DLE Y, where X can be anything besides DLE and Y can be anything except DLE or ETX. In other words, each DLE must be followed by either DLE or ETX.
(b) 0111 1111.

10. (a) After 48×8 = 384 bits we can be off by no more than ±1/2 bit, which is about 1 part in 800.
(b) One frame is 810 bytes; at STS-1 51.8 Mbps speed we are sending 51.8×10^6/(8×810) = about 8000 frames/sec, or about 480,000 frames/minute. Thus, if station B's clock ran faster than station A's by one part in 480,000, A would accumulate about one extra frame per minute.

11. Suppose an undetectable three-bit error occurs. The three bad bits must be spread among one, two, or three rows. If these bits occupy two or three rows, then some row must have exactly one bad bit, which would be detected by the parity bit for that row. But if the three bits are all in one row, then that row must again have a parity error (as must each of the three columns containing the bad bits).

12. If we flip the bits corresponding to the corners of a rectangle in the 2-D layout of the data, then all parity bits will still be correct. Furthermore, if four bits change and no error is detected, then the bad bits must form a rectangle: in order for the error to go undetected, each row and column must have no errors or exactly two errors.

13. If we know only one bit is bad, then 2-D parity tells us which row and column it is in, and we can then flip it. If, however, two bits are bad in the same row, then the row parity remains correct, and all we can identify is the columns in which the bad bits occur.

14. We need to show that the 1's-complement sum of two non-0x0000 numbers is non-0x0000. If no unsigned overflow occurs, then the sum is just the 2's-complement sum and can't be 0x0000 without overflow; in the absence of overflow, addition is monotonic. If overflow occurs, then the result is at least 0x0000 plus the addition of a carry bit, i.e. ≥ 0x0001.

15. Let's define swap([A,B]) = [B,A], where A and B are one byte each. We only need to show [A,B] +' [C,D] = swap([B,A] +' [D,C]). If both (A+C) and (B+D) have no carry, the equation obviously holds. If A+C has a carry and B+D+1 does not,

    [A,B] +' [C,D] = [(A+C) & 0xFF, B+D+1]
    swap([B,A] +' [D,C]) = swap([B+D+1, (A+C) & 0xFF]) = [(A+C) & 0xFF, B+D+1]

(The case where B+D+1 also has a carry is similar to the last case.) If B+D has a carry and A+C+1 does not,

    [A,B] +' [C,D] = [A+C+1, (B+D) & 0xFF]
    swap([B,A] +' [D,C]) = swap([(B+D) & 0xFF, A+C+1]) = [A+C+1, (B+D) & 0xFF]

[…]

21. (a) M has eight elements; there are only four values for e, so there must be m1 and m2 in M with e(m1) = e(m2). Now if m1 is transmuted into m2 by a two-bit error, then the error code e cannot detect this.
(b) For a crude estimate, let M be the set of N-bit messages with four 1's, and all the rest zeros. The size of M is (N choose 4) = N!/(4!(N−4)!). Any element of M can be transmuted into any other by an 8-bit error. If we take N large enough that the size of M is bigger than 2^32, then as in part (a) there must, for any 32-bit error-code function e(m), be elements m1 and m2 of M with e(m1) = e(m2). To find a sufficiently large N, we note N!/(4!(N−4)!) > (N−3)^4/24; it thus suffices to find N so that (N−3)^4 > 24 × 2^32 ≈ 10^11. N ≈ 600 works. Considerably smaller estimates are possible.
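The byte-swap invariance argued in Exercise 15 is why the Internet checksum can be computed in either byte order. A small C++ sketch of the underlying 16-bit 1's-complement sum (names are ours):

    #include <cstdint>
    #include <cstdio>

    uint16_t ones_sum(uint16_t x, uint16_t y) {
        uint32_t s = uint32_t(x) + uint32_t(y);
        return uint16_t((s & 0xFFFF) + (s >> 16));   // end-around carry
    }

    int main() {
        // swap() commutes with +', as Exercise 15 shows:
        printf("%04x\n", ones_sum(0x12EF, 0xF034));  // 0324
        printf("%04x\n", ones_sum(0xEF12, 0x34F0));  // 2403, its byte-swap
    }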
22. Assume a NAK is sent only when an out-of-order packet arrives. The receiver must now maintain a RESEND_NAK timer in case the NAK, or the packet it NAK'ed, is lost. Unfortunately, if the sender sends a packet and is then idle for a while, and this packet is lost, the receiver has no way of noticing the loss. Either the sender must maintain a timeout anyway, requiring ACKs, or else some zero-data filler packets must be sent during idle times. Both are burdensome. Finally, at the end of the transmission a strict NAK-only strategy would leave the sender unsure about whether any packets got through. A final out-of-order filler packet, however, might solve this.

23. (a) Propagation delay = 40×10^3 m/(2×10^8 m/s) = 200 µs.
(b) The round-trip time would be about 400 µs. A plausible timeout time would be twice this, or 0.8 ms. Smaller values (but larger than 0.4 ms!) might be reasonable, depending on the amount of variation in actual RTTs. See Section 5.2.6 of the text.
(c) The propagation-delay calculation does not consider processing delays that may be introduced by the remote node; it may not be able to answer immediately.

24. Bandwidth × (round-trip) delay is about 125 KBps × 2.5 s = 312 KB, or 312 packets. The window size should be this large; the sequence number space must cover twice this range, or up to 624. 10 bits are needed.

25. The answer is in the book.

26. If the receiver delays sending an ACK until buffer space is available, it risks delaying so long that the sender times out unnecessarily and retransmits the frame.

27. For Fig 2.17(b) (lost frame), there are no changes from the diagram in the text. The next two figures correspond to the text's Fig 2.17(c) and (d); (c) shows a lost ACK and (d) shows an early timeout. For (c), the receiver timeout is shown slightly greater than (for definiteness) twice the sender timeout. (Timeline figures: in (c) the duplicate frame is ignored and the receiver still waits for its timeout on Frame[N+1]; in (d) the timeout for Frame[N+1] is cancelled by ACK[N+1]. The diagrams do not survive this extraction.) A further version of Fig 2.17(c) (lost ACK) shows a receiver timeout of approximately half the sender timeout: the receiver retransmits before the sender times out, and yet another timeout is possible, depending on the exact timeout intervals. (Figure likewise not recoverable.)

28. (a) The duplications continue until the end of the transmission: each original ACK elicits a response to a duplicate frame, and each original frame elicits a response to a duplicate ACK, so every frame and ACK is sent twice. (Timeline figure not recoverable from the extraction.)
(b) To trigger the sorcerer's apprentice phenomenon, a duplicate data frame must cross somewhere in the network with the previous ACK for that frame. If both sender and receiver adopt a resend-on-timeout strategy, with the same timeout interval, and an ACK is lost, then both sender and receiver will indeed retransmit at about the same time. Whether these retransmissions are synchronized enough that they cross in the network depends on other factors; it helps to have some modest latency delay or else slow hosts. With the right conditions, however, the sorcerer's apprentice phenomenon can be reliably reproduced.
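Window-sizing arithmetic like Exercise 24's is mechanical enough to script. A C++ sketch (ours; it uses the exercise's 125 KBps link, 2.5 s RTT, and treats a 1 KB packet as 1000 bytes, matching the text's round numbers):

    #include <cmath>
    #include <cstdio>

    int main() {
        double bw_Bps = 125e3;    // link bandwidth, bytes/sec
        double rtt_s  = 2.5;      // round-trip time
        double pkt_B  = 1000;     // packet size

        double window   = bw_Bps * rtt_s / pkt_B;        // ~312 packets
        double seqspace = 2 * window;                    // ~624 values
        int bits = int(std::ceil(std::log2(seqspace)));  // 10 bits
        printf("window = %.0f packets, sequence space = %.0f, bits = %d\n",
               window, seqspace, bits);
    }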
29. The following is based on what TCP actually does: every ACK might (optionally

[…]

…before ACK[N] can arrive later. As before, we let ACK[N] denote the acknowledgment of all data packets less than N.
(a) If DATA[6] is in the receive window, then the earliest that window can be is DATA[4]-DATA[6]. This in turn implies ACK[4] was sent, and thus that DATA[1]-DATA[3] were received, and thus that DATA[0], by our initial remark, can no longer arrive.
(b) If ACK[6] may be sent, then the lowest the sending window can be is DATA[3]..DATA[5]. This means that ACK[3] must have been received. Once an ACK is received, no smaller ACK can ever be received later.

35. (a) The smallest working value for MaxSeqNum is 8. It suffices to show that if DATA[8] is in the receive window, then DATA[0] can no longer arrive at the receiver. We have that DATA[8] in receive window ⇒ the earliest possible receive window is DATA[6]..DATA[8] ⇒ ACK[6] has been received ⇒ DATA[5] was delivered. But because SWS = 5, all DATA[0]'s sent were sent before DATA[5] ⇒ by the no-out-of-order-arrival hypothesis, DATA[0] can no longer arrive.
(b) We show that if MaxSeqNum = 7, then the receiver can be expecting DATA[7] and an old DATA[0] can still arrive. Because 7 and 0 are indistinguishable mod MaxSeqNum, the receiver cannot tell which actually arrived. We follow the strategy of Exercise 27.
1. Sender sends DATA[0]...DATA[4]. All arrive.
2. Receiver sends ACK[5] in response, but it is slow. The receive window is now DATA[5]..DATA[7].
3. Sender times out and retransmits DATA[0]. The receiver accepts it as DATA[7].
(c) MaxSeqNum ≥ SWS + RWS.

36. (a) Note that this is the canonical SWS = bandwidth×delay case, with RTT = 4 sec. In the following we list the progress of one particular packet. At any given instant, there are four packets outstanding in various states.

    T=N    Data[N] leaves A
    T=N+1  Data[N] arrives at R
    T=N+2  Data[N] arrives at B; ACK[N] leaves
    T=N+3  ACK[N] arrives at R
    T=N+4  ACK[N] arrives at A; DATA[N+4] leaves

Here is a specific timeline showing all packets in progress:

    T=0  Data[0]...Data[3] ready; Data[0] sent
    T=1  Data[0] arrives at R; Data[1] sent
    T=2  Data[1] arrives at R; Data[0] arrives at B; ACK[0] starts back; Data[2] sent
    T=3  ACK[0] arrives at R; Data[2] arrives at R; Data[1] arrives at B; ACK[1] starts back; Data[3] sent
    T=4  ACK[0] arrives at A; ACK[1] arrives at R; Data[3] arrives at R; Data[2] arrives at B; ACK[2] starts back; Data[4] sent
    T=5  ACK[1] arrives at A; ACK[2] arrives at R; Data[4] arrives at R; Data[3] arrives at B; ACK[3] starts back; Data[5] sent

(b)

    T=0  Data[0]...Data[3] sent
    T=1  Data[0]...Data[3] arrive at R
    T=2  Data arrive at B; ACK[0]...ACK[3] start back
    T=3  ACKs arrive at R
    T=4  ACKs arrive at A; Data[4]...Data[7] sent
    T=5  Data arrive at R

37.

    T=0  A sends frames 1-4. Frame[1] starts across the R-B link. Frames 2,3,4 are in R's queue.
    T=1  Frame[1] arrives at B; ACK[1] starts back; Frame[2] leaves R. Frames 3,4 are in R's queue.
    T=2  ACK[1] arrives at R and then A; A sends Frame[5] to R; Frame[2] arrives at B; B sends ACK[2] to R. R begins sending Frame[3]; frames 4,5 are in R's queue.
    T=3  ACK[2] arrives at R and then A; A sends Frame[6] to R; Frame[3] arrives at B; B sends ACK[3] to R; R begins sending Frame[4]; frames 5,6 are in R's queue.
    T=4  ACK[3] arrives at R and then A; A sends Frame[7] to R; Frame[4] arrives at B; B sends ACK[4] to R. R begins sending Frame[5]; frames 6,7 are in R's queue.

The steady-state queue size at R is two frames.
38.

    T=0  A sends frames 1-4. Frame[1] starts across the R-B link. Frame[2] is in R's queue; frames 3 & 4 are lost.
    T=1  Frame[1] arrives at B; ACK[1] starts back; Frame[2] leaves R.
    T=2  ACK[1] arrives at R and then A; A sends Frame[5] to R. R immediately begins forwarding it to B. Frame[2] arrives at B; B sends ACK[2] to R.
    T=3  ACK[2] arrives at R and then A; A sends Frame[6] to R. R immediately begins forwarding it to B. Frame[5] (not 3) arrives at B; B sends no ACK.
    T=4  Frame[6] arrives at B; again, B sends no ACK.
    T=5  A TIMES OUT, and retransmits frames 3 and 4. R begins forwarding Frame[3] immediately, and enqueues 4.
    T=6  Frame[3] arrives at B and ACK[3] begins its way back. R begins forwarding Frame[4].
    T=7  Frame[4] arrives at B and ACK[6] begins its way back. ACK[3] reaches A and A then sends Frame[7]. R begins forwarding Frame[7].

39. Hosts sharing the same address will be considered to be the same host by all other hosts. Unless the conflicting hosts coordinate the activities of their higher-level protocols, it is likely that higher-level protocol messages with otherwise identical demux information from both hosts will be interleaved and result in communication breakdown.

40. One-way delays:

    Coax:          1500 m    6.49 µs
    link:          1000 m    5.13 µs
    repeaters:     two       1.20 µs
    transceivers:  six       1.20 µs   (two for each repeater, one for each station)
    drop cable:    6×50 m    1.54 µs
    Total:                  15.56 µs

The round-trip delay is thus about 31.1 µs, or 311 bits. The "official" total is 464 bits, which when extended by 48 bits of jam signal exactly accounts for the 512-bit minimum packet size. The 1982 Digital-Intel-Xerox specification presents a delay budget (page 62 of that document) that totals 463.8 bit-times, leaving 20 nanoseconds for unforeseen contingencies.

41. A station must not only detect a remote signal, but for collision detection it must detect a remote signal while it itself is transmitting. This requires much higher remote-signal intensity.

42. (a) Assuming 48 bits of jam signal was still used, the minimum packet size would be 4640 + 48 bits = 586 bytes.
(b) This packet size is considerably larger than many higher-level packet sizes, resulting in considerable wasted bandwidth.
(c) The minimum packet size could be smaller if the maximum collision-domain diameter were reduced, and if sundry other tolerances were tightened up.

43. (a) A can choose kA = 0 or 1; B can choose kB = 0,1,2,3. A wins outright if (kA, kB) is among (0,1), (0,2), (0,3), (1,2), (1,3); there is a 5/8 chance of this.
(b) Now we have kB among 0...7. If kA = 0, there are 7 choices for kB that have A win; if kA = 1 then there are 6 choices. All told, the probability of A's winning outright is 13/16.

[…]

T=0: hosts A,B,C,D,E all transmit and collide. Backoff times are chosen by a single coin flip; we happened to get kA=1, kB=0, kC=0, kD=1, kE=1. At the end of this first collision, T is now 1. B and C retransmit at T=1; the others wait until T=2.

T=1: hosts B and C transmit, immediately after the end of the first collision, and collide again. This time two coin flips are needed for each backoff; we happened to get kB = 00 = 0, kC = 11 = 3. At this point T is now 2; B will thus attempt again at T=2+0=2; C will attempt again at T=2+3=5.

T=2: hosts A,B,D,E attempt. B chooses a three-bit backoff time, as it is on its third collision, while the others choose two-bit times. We got kA = 10 = 2, kB = 010 = 2, kD = 01 = 1, kE = 11 = 3. We add each k to T=3 to get the respective retransmission-attempt times: T=5,5,4,6.

T=3: Nothing happens.

T=4: Station D is the only one to attempt transmission; it successfully seizes the channel.

T=5: Stations A, B, and C sense the channel before transmission, but find it busy. E joins them at T=6.

(b) Perhaps the most significant difference on a real Ethernet is that stations close to each other will detect collisions almost immediately; only stations at extreme opposite points will need a full slot time to detect a collision. Suppose stations A and B are close, and C is far away. All transmit at the same time T=0. Then A and B will effectively start their backoff at T≈0; C will on the other hand wait for T=1. If A, B, and C choose the same backoff time, A and B will be nearly a full slot ahead. Interframe spacing is only one-fifth of a slot time and applies to all participants equally; it is not likely to matter here.

50. Here is a simple program:
T=5: Stations A, B, and C sense the channel before transmission, but find it busy. E joins them at T=6. (b) Perhaps the most significant difference on a real Ethernet is that stations close to each other will detect collisions almost immediately; only stations at extreme opposite points will need a full slot time to detect a collision. Suppose stations A and B are close, and C is far away. All transmit at the same time T=0. Then A and B will effectively start their backoff at T≈0; C will on the other hand wait for T=1. If A, B, and C choose the same backoff time, A and B will be nearly a full slot ahead. Interframe spacing is only one-fifth of a slot time and applies to all partici- pants equally; it is not likely to matter here. 50. Here is a simple program: #define USAGE "ether N" // Simulates N ethernet stations all trying to transmit at once; // returns average # of slot times until one station succeeds. #include <iostream.h> #include <stdlib.h> #include <assert.h> #define MAX 1000 /* max # of stations */ class station { public: void reset() { NextAttempt = CollisionCount = 0;} bool transmits(int T) {return NextAttempt==T;} void collide() { // updates station after a collision CollisionCount ++; NextAttempt += 1 + backoff( CollisionCount); Chapter 2 24 //the 1 above is for the current slot } private: int NextAttempt; int CollisionCount; static int backoff(int k) { //choose random number 0..2∧k-1; ie choose k random bits unsigned short r = rand(); unsigned short mask = 0xFFFF >> (16-k); // mask = 2∧k-1 return int (r & mask); } }; station S[MAX]; // run does a single simulation // it returns the time at which some entrant transmits int run (int N) { int time = 0; int i; for (i=0;i<N;i++) { S[i].reset(); } while(1) { int count = 0; // # of attempts at this time int j= -1; // count the # of attempts; save j as index of one of them for (i=0; i<N; i++) { if (S[i].transmits(time)) {j=i; ++count;} } if (count==1) // we are done return time; else if (count>1) { // collisions occurred for (i=0;i<N;i++) { if (S[i].transmits(time)) S[i].collide(); } } ++time; } } int RUNCOUNT = 10000; void main(int argc, char * argv[]) { int N, i, runsum=0; assert(argc == 2); N=atoi(argv[1]); assert(N<MAX); for (i=0;i<RUNCOUNT;i++) runsum += run(N); Chapter 2 25 cout << "runsum = " << runsum << " RUNCOUNT= " << RUNCOUNT << " average: " << ((double)runsum)/RUNCOUNT << endl; return; } Here is some data obtained from it: # stations slot times 5 3.9 10 6.6 20 11.2 40 18.8 100 37.7 200 68.6 51. We alternate N/2 slots of wasted bandwidth with 5 slots of useful bandwidth. The useful fraction is: 5/(N/2 + 5) = 10/(N+10) 52. (a) The program is below. It produced the following output: λ # slot times λ # slot times 1 6.39577 2 4.41884 1.1 5.78198 2.1 4.46704 1.2 5.36019 2.2 4.4593 1.3 5.05141 2.3 4.47471 1.4 4.84586 2.4 4.49953 1.5 4.69534 2.5 4.57311 1.6 4.58546 2.6 4.6123 1.7 4.50339 2.7 4.64568 1.8 4.45381 2.8 4.71836 1.9 4.43297 2.9 4.75893 2 4.41884 3 4.83325 The minimum occurs at about λ=2; the theoretical value of the minimum is 2e − 1 = 4.43656. (b) If the contention period has length C, then the useful fraction is 8/(C +8), which is about 64% for C = 2e − 1. #include <iostream.h> #include <stdlib.h> #include <math.h> const int RUNCOUNT = 100000; // X = X(lambda) is our random variable double X(double lambda) { double u; do { Chapter 2 28 Solutions for Chapter 3 1. The following table is cumulative; at each part the VCI tables consist of the entries at that part and also all previous entries. 
Solutions for Chapter 3

1. The following table is cumulative; at each part the VCI tables consist of the entries at that part and also all previous entries. Note that at the last stage, when a connection comes in to A, we assume the VCI used at stage (a) cannot be re-used in the opposite direction, as would be the case for bi-directional circuits (the most common sort).

    Exercise   Switch   Input Port   Input VCI   Output Port   Output VCI
    (a)        1        2            0           3             0
    (b)        1        0            0           1             0
               2        3            0           0             0
               3        0            0           3             0
    (c)        1        0            1           1             1
               2        3            1           0             1
               3        0            1           2             0
    (d)        1        2            1           1             2
               2        3            2           0             2
               3        0            2           3             1
    (e)        2        1            0           0             3
               3        0            3           1             0
               4        2            0           3             0
    (f)        1        1            3           2             2
               2        1            1           3             3
               4        0            0           3             1

2. The answer is in the book.

3.
    Node A:                       Node B:
    Destination   Next hop        Destination   Next hop
    B             C               A             E
    C             C               C             E
    D             C               D             E
    E             C               E             E
    F             C               F             E

    Node C:                       Node D:
    Destination   Next hop        Destination   Next hop
    A             A               A             E
    B             E               B             E
    D             E               C             E
    E             E               E             E
    F             F               F             E

    Node E:                       Node F:
    Destination   Next hop        Destination   Next hop
    A             C               A             C
    B             B               B             C
    C             C               C             C
    D             D               D             C
    F             C               E             C

4.
    S1:  destination   port       S2:  destination   port
         A             1               A             1
         B             2               B             1
         default       3               C             3
                                       D             3
                                       default       2

    S3:  destination   port       S4:  destination   port
         C             2               D             2
         D             3               default       1
         default       1

5. In the following, Si[j] represents the jth entry (counting from 1 at the top) for switch Si.

    A connects to D via S1[1]—S2[1]—S3[1]
    A connects to B via S1[2]
    B connects to D via S1[3]—S2[2]—S3[2]

6. We provide space in the packet header for a second address list, in which we build the return address. Each time the packet traverses a switch, the switch must add the inbound port number to this return-address list, in addition to forwarding the packet out the outbound port listed in the "forward" address. For example, as the packet traverses Switch 1 in Figure 3.7, towards forward address "port 1", the switch writes "port 2" into the return address. Similarly, Switch 2 must write "port 3" in the next position of the return address. The return address is complete once the packet arrives at its destination.

Another possible solution is to assign each switch a locally unique name; that is, a name not shared by any of its directly connected neighbors. Forwarding switches (or the originating host) would then fill in the sequence of these names. When a packet was sent in reverse, switches would use these names to look up the previous hop. We might reject locally unique names, however, on the grounds

[…]

17. (a) When X sends to W, the packet is forwarded on all links; all bridges learn where X is. Y's network interface would see this packet.
(b) When Z sends to X, all bridges already know where X is, so each bridge forwards the packet only on the link towards X, that is, B3→B2→B1→X. Since the packet traverses all bridges, all bridges learn where Z is. Y's network interface would not see the packet, as B2 would only forward it on the B1 link.
(c) When Y sends to X, B2 would forward the packet to B1, which in turn forwards it to X. Bridges B2 and B1 thus learn where Y is. B3 and Z never see the packet.
(d) When W sends to Y, B3 does not know where Y is, and so retransmits on all links; Z's network interface would thus see the packet. When the packet arrives at B2, though, it is retransmitted only to Y (and not to B1), as B2 does know where Y is from step (c). B3 and B2 now know where W is, but B1 does not learn where W is.

18. B1 will be the root; B2 and B3 each have two equal-length paths (along their upward link and along their downward link) to B1. They will each, independently, select one of these vertical links to use (perhaps preferring the interface by which they first heard from B1), and disable the other. There are thus four possible solutions.
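The learn-then-forward behavior traced in Exercise 17 is compact in code. A minimal C++ learning-bridge sketch (types, names, and the three-port example are ours):

    #include <iostream>
    #include <map>
    #include <string>
    #include <vector>

    struct Bridge {
        std::map<std::string, int> table;   // host -> port it was heard on

        // Returns the ports out of which the packet should be forwarded.
        std::vector<int> forward(const std::string& src, const std::string& dst,
                                 int in_port, int nports) {
            table[src] = in_port;           // learn where src lives
            std::vector<int> out;
            auto it = table.find(dst);
            if (it != table.end()) {
                if (it->second != in_port) out.push_back(it->second);
            } else {
                for (int p = 0; p < nports; p++)   // unknown: flood
                    if (p != in_port) out.push_back(p);
            }
            return out;
        }
    };

    int main() {
        Bridge b;
        // X (heard on port 0) sends to W: unknown, flooded out ports 1 and 2.
        for (int p : b.forward("X", "W", 0, 3)) std::cout << p << " ";
        std::cout << "\n";
        // Z (port 2) sends to X: known, forwarded only toward X (port 0).
        for (int p : b.forward("Z", "X", 2, 3)) std::cout << p << "\n";
    }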
19. (a) The packet will circle endlessly, in both the M→B2→L→B1 and M→B1→L→B2 directions.
(b) Initially we (potentially) have four packets: one from M clockwise, one from M counterclockwise, and a similar pair from L. Suppose a packet from L arrives at an interface to a bridge Bi, followed immediately via the same interface by a packet from M. As the first packet arrives, the bridge adds 〈L, arrival-interface〉 to the table (or, more likely, updates an existing entry for L). When the second packet arrives, addressed to L, the bridge then decides not to forward it, because it arrived from the interface recorded in the table as pointing towards the destination, and so it dies.

Because of this, we expect that in the long run only one of the pair of packets traveling in the same direction will survive. We may end up with two from M, two from L, or one from M and one from L. A specific scenario for the latter is as follows, where the bridges' interfaces are denoted "top" and "bottom":

1. L sends to B1 and B2; both place 〈L, top〉 in their table. B1 already has the packet from M in the queue for the top interface; B2 has this packet in the queue for the bottom.
2. B1 sends the packet from M to B2 via the top interface. Since the destination is L and 〈L, top〉 is in B2's table, it is dropped.
3. B2 sends the packet from M to B1 via the bottom interface, so B1 updates its table entry for M to 〈M, bottom〉.
4. B2 sends the packet from L to B1 via the bottom interface, causing it to be dropped.

The packet from M now circulates counterclockwise, while the packet from L circulates clockwise.

20. (a) In this case the packet would never be forwarded; as it arrived from a given interface the bridge would first record 〈M, interface〉 in its table and then conclude the packet destined for M did not have to be forwarded out the other interface.
(b) Initially we would have a copy of the packet circling clockwise (CW) and a copy circling counterclockwise (CCW). This would continue as long as they traveled in perfect symmetry, with each bridge seeing alternating arrivals of the packet through the top and bottom interfaces. Eventually, however, something like the following is likely to happen:

0. Initially, B1 and B2 are ready to send to each other via the top interface; both believe M is in the direction of the bottom interface.
1. B1 starts to send to B2 via the top interface (CW); the packet is somehow delayed in the outbound queue.
2. B2 does send to B1 via the top interface (CCW).
3. B1 receives the CCW packet from step 2, and immediately forwards it over the bottom interface back to B2. The CW packet has not yet been delivered to B2.
4. B2 receives the packet from step 3, via the bottom interface. Because B2 currently believes that the destination, M, lies on the bottom interface, B2 drops the packet.

The clockwise packet would then be dropped on its next circuit, leaving the loop idle.
21. (a) If the bridge forwards all spanning-tree messages, then the remaining bridges would see networks D,E,F,G,H as a single network. The tree produced would have B2 as root, and would disable the following links: from B5 to A (the D side of B5 has a direct connection to B2), from B7 to B, and from B6 to either side.
(b) If B1 simply drops the messages, then as far as the spanning-tree algorithm is concerned the five networks D-H have no direct connection, and in fact the entire extended LAN is partitioned into two disjoint pieces, A-F and G-H. Neither piece has any redundancy alone, so the separate spanning trees that would be created would leave all links active. Since bridge B1 still presumably is forwarding other messages, all the original loops would still exist.

22. (a) Whenever any host transmits, the packet collides with itself.
(b) It is difficult or impossible to send status packets, since they too would self-collide as in (a). Repeaters do not look at a packet before forwarding, so they wouldn't be in a position to recognize status packets as such.
(c) A hub might notice a loop because collisions always occur, whenever any host transmits. Having noticed this, the hub might send a specific signal out one interface, during the rare idle moment, and see if that signal arrives back via another. The hub might, for example, attempt to verify that whenever a signal went out port 1, then a signal always appeared immediately at, say, port 3. We now wait some random time, to avoid the situation where a neighboring hub has also noticed the loop and is also disabling ports, and if the situation still persists we disable one of the looping ports. Another approach altogether might be to introduce some distinctive signal that does not correspond to the start of any packet, and use this for hub-to-hub communication.

23. Once we determine that two ports are on the same LAN, we can choose the smaller-numbered port and shut off the other. A bridge will know it has two interfaces on the same LAN when it sends out its initial "I am root" configuration messages and receives its own messages back, without their being marked as having passed through another bridge.

24. A 53-byte ATM cell has 5 bytes of headers, for an overhead of about 9.4% for ATM headers alone. ATM adaptation layers add varying amounts of additional overhead.

25. The drawbacks to datagram routing for small cells are the larger addresses, which would now take up a considerable fraction of each cell, and the considerably higher per-cell processing costs in each router that are not proportional to cell size.

26. Since the I/O bus speed is less than the memory bandwidth, it is the bottleneck. The effective bandwidth that the I/O bus can provide is 800/2 Mbps, because each packet crosses the I/O bus twice. Therefore, the number of interfaces is ⌊400/100⌋ = 4.

27. The answer is in the book.

28. The workstation can handle 1000/2 = 500 Mbps, limited by the I/O bus. Let the packet size be x bits; to support 500,000 packets/second we need a total capacity of 500000 × x bps; equating 5×10^5 × x = 500×10^6 bps, we get x = 1000 bits = 125 bytes. For packet sizes below this, the packet forwarding rate is the limiter; above this, the limit is the I/O bus bandwidth.

29. Switch with input FIFO buffering:
(a) An input FIFO may become full if the packet at the head is destined for a full output FIFO. Packets that arrive on ports whose input FIFOs are full are lost regardless of their destination.
(b) This is called head-of-line blocking.

[…]

…IPv6 uses link-layer fragmentation exclusively; experience had by then established reasonable MTU values, and also illuminated the performance problems of IPv4-style fragmentation. (Path-MTU discovery is also mandatory, which means the sender always knows just how large the data passed to IP can be to avoid fragmentation.) Whether or not link-layer fragmentation is feasible appears to depend on the nature of the link; neither version of IP therefore requires it.
42. If the timeout value is too small, we clutter the network with unnecessary re-requests, and halt transmission until the re-request is answered. When a host's Ethernet address changes, e.g. because of a card replacement, then that host is unreachable to others that still have the old Ethernet address in their ARP cache. 10-15 minutes is a plausible minimal amount of time required to shut down a host, swap its Ethernet card, and reboot.

While self-ARP (described in the following exercise) is arguably a better solution to the problem of a too-long ARP timeout, coupled with having other hosts update their caches whenever they see an ARP query from a host already in the cache, these features were not always universally implemented. A reasonable upper bound on the ARP cache timeout is thus necessary as a backup.

43. The answer is maybe, in theory, but the practical consequences rule it out. A MAC address is statically assigned to each hardware interface. ARP mapping enables indirection from IP addresses to the hardware MAC addresses. This allows IP addresses to be dynamically reallocated when the hardware moves to a different network, e.g. when a mobile wireless device moves to a new access network. So using MAC addresses as IP addresses would mean that we would have to use static IP addresses. Since Internet routing takes advantage of address-space hierarchy (higher bits for network addresses, lower bits for host addresses), if we had to use static IP addresses, routing would be much less efficient. Therefore this design is practically not feasible.

44. After B broadcasts any ARP query, all stations that had been sending to A's physical address will switch to sending to B's. A will see a sudden halt to all arriving traffic. (To guard against this, A might monitor for ARP broadcasts purportedly coming from itself; A might even immediately follow such broadcasts with its own ARP broadcast in order to return its traffic to itself. It is not clear, however, how often this is done.)

If B uses self-ARP on startup, it will receive a reply indicating that its IP address is already in use, which is a clear indication that B should not continue on the network until the issue is resolved.

45. (a) If multiple packets after the first arrive at the IP layer for outbound delivery, but before the first ARP response comes back, then we send out multiple unnecessary ARP packets. Not only do these consume bandwidth, but, because they are broadcast, they interrupt every host and propagate across bridges.
(b) We should maintain a list of currently outstanding ARP queries. Before sending a query, we first check this list. We also might now retransmit queries on the list after a suitable timeout.
(c) This might, among other things, lead to frequent and excessive packet loss at the beginning of new connections.

46. (a)
    Information       Distance to Reach Node
    Stored at Node    A    B    C    D    E    F
    A                 0    ∞    3    8    ∞    ∞
    B                 ∞    0    ∞    ∞    2    ∞
    C                 3    ∞    0    ∞    1    6
    D                 8    ∞    ∞    0    2    ∞
    E                 ∞    2    1    2    0    ∞
    F                 ∞    ∞    6    ∞    ∞    0

(b)
    Information       Distance to Reach Node
    Stored at Node    A    B    C    D    E    F
    A                 0    ∞    3    8    4    9
    B                 ∞    0    3    4    2    ∞
    C                 3    3    0    3    1    6
    D                 8    4    3    0    2    ∞
    E                 4    2    1    2    0    7
    F                 9    ∞    6    ∞    7    0

(c)
    Information       Distance to Reach Node
    Stored at Node    A    B    C    D    E    F
    A                 0    6    3    6    4    9
    B                 6    0    3    4    2    9
    C                 3    3    0    3    1    6
    D                 6    4    3    0    2    9
    E                 4    2    1    2    0    7
    F                 9    9    6    9    7    0

47. The answer is in the book.

48. D:
    Step   Confirmed                               Tentative
    1      (D,0,-)
    2      (D,0,-)                                 (A,8,A) (E,2,E)
    3      (D,0,-) (E,2,E)                         (A,8,A) (B,4,E) (C,3,E)
    4      (D,0,-) (E,2,E) (C,3,E)                 (A,6,E) (B,4,E) (F,9,E)
    5      (D,0,-) (E,2,E) (C,3,E) (B,4,E)         (A,6,E) (F,9,E)
    6      previous + (A,6,E)
    7      previous + (F,9,E)
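Exercises 48 (and 62 below) tabulate the link-state forward-search algorithm's Confirmed and Tentative lists. As a cross-check, here is a minimal C++17 Dijkstra sketch, ours rather than the manual's; the graph is taken from the distance table in Exercise 46(a), with A..F numbered 0..5, and the priority queue plays the role of the Tentative list:

    #include <cstdio>
    #include <functional>
    #include <queue>
    #include <vector>

    int main() {
        const int N = 6;
        std::vector<std::vector<std::pair<int,int>>> adj(N);  // (neighbor, cost)
        auto link = [&](int u, int v, int w) {
            adj[u].push_back({v, w});
            adj[v].push_back({u, w});
        };
        // Links per Exercise 46: A-C:3, A-D:8, B-E:2, C-E:1, C-F:6, D-E:2
        link(0,2,3); link(0,3,8); link(1,4,2); link(2,4,1); link(2,5,6); link(3,4,2);

        std::vector<int> dist(N, 1 << 30);
        std::priority_queue<std::pair<int,int>,
                            std::vector<std::pair<int,int>>,
                            std::greater<>> tentative;        // (cost, node)
        dist[3] = 0;                        // start from D, as in Exercise 48
        tentative.push({0, 3});
        while (!tentative.empty()) {
            auto [d, u] = tentative.top(); tentative.pop();
            if (d > dist[u]) continue;      // stale entry: u already confirmed
            for (auto [v, w] : adj[u])
                if (d + w < dist[v]) {      // better tentative route via u
                    dist[v] = d + w;
                    tentative.push({dist[v], v});
                }
        }
        for (int i = 0; i < N; i++)         // prints A:6 B:4 C:3 D:0 E:2 F:9
            printf("%c: %d\n", 'A' + i, dist[i]);
    }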
50. Traceroute sends packets with limited TTL values. If we send to an unassigned network, then as long as the packets follow default routes, traceroute will get normal answers. When the packet reaches a default-free (backbone) router, however (or, more precisely, a router which recognizes that the destination doesn't exist), the process will abruptly stop. Packets will not be forwarded further. The router that finally realizes the error will send back "ICMP host unreachable" or "ICMP net unreachable", but this ICMP result may not in fact be listened for by traceroute (it is not, in implementations with which I am familiar), in which case the traceroute session will end with timeouts either way.

51. A can reach B and D but not C. Because A hasn't been configured with subnet information, it treats C and B as being on the same network (it shares a network number with them, being in the same site). To reach B, A sends ARP requests directly to B; these are passed by RB, as are the actual Ethernet packets. To reach D, which A recognizes as being on another network, A uses ARP to send to R2. However, if A tries to ARP to C, the request will not pass R1.

52. The cost=1 links show A connects to B and D; F connects to C and E. F reaches B through C at cost 2, so B and C must connect. F reaches D through E at cost 2, so D and E must connect. A reaches E at cost 2 through B, so B and E must connect. These give the network with links

    A—B, A—D, B—C, B—E, D—E, C—F, E—F.

(The accompanying drawing of this topology does not survive the extraction.) As this network is consistent with the tables, it is the unique minimal solution.

[…]

(b) Without poison reverse, A and B would send each other updates that simply didn't mention X; presumably (this does depend somewhat on implementation) this would mean that the false routes to X would sit there until they eventually aged out. With poison reverse, such a loop would go away on the first table update exchange.
(c) 1. B and A each send out announcements of their route to X via C to each other.
2. C announces to A and B that it can no longer reach X; the announcements of step 1 have not yet arrived.
3. B and A receive each other's announcements from step 1, and adopt them.

60. We will implement hold-down as follows: when an update record arrives that indicates a destination is unreachable, all subsequent updates within some given time interval are ignored and discarded.

Given this, then in the EAB network A ignores B's reachability news for one time interval, during which time A presumably reaches B with the correct unreachability information. Unfortunately, in the EABD case, this also means A ignores the valid B–D–E path. Suppose, in fact, that A reports its failure to B, D reports its valid path to B, and then B reports to A, all in rapid succession. This new route will be ignored.

One way to avoid delaying discovery of the B–D–E path is to keep the hold-down time interval as short as possible, relying on triggered updates to spread the unreachability news quickly.

Another approach to minimizing delay for new valid paths is to retain route information received during the hold-down period, but not to use it. At the expiration of the hold-down period, the sources of such information might be interrogated to determine whether it remains valid. Otherwise we might have to wait not only the hold-down interval but also until the next regular update in order to receive the new route news.
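The hold-down rule described in Exercise 60 fits in a few lines. A C++ sketch under the exercise's definition (the types, names, and the 60-second interval are ours):

    #include <map>
    #include <string>

    struct Route {
        int cost = 1 << 30;        // "infinity" = unreachable
        double hold_until = 0;     // ignore updates until this time
    };

    struct Table {
        std::map<std::string, Route> routes;
        static constexpr double HOLD_DOWN = 60.0;   // seconds (assumed value)

        void update(const std::string& dest, int cost, double now) {
            Route& r = routes[dest];
            if (now < r.hold_until) return;         // in hold-down: discard
            if (cost >= (1 << 30)) {                // unreachability news
                r.cost = 1 << 30;
                r.hold_until = now + HOLD_DOWN;     // start hold-down
            } else {
                r.cost = cost;                      // ordinary update
            }
        }
    };

    int main() {
        Table t;
        t.update("X", 1 << 30, 0.0);   // X reported unreachable at t=0
        t.update("X", 3, 10.0);        // a valid B-D-E-style path: ignored
        t.update("X", 3, 70.0);        // after hold-down expires: accepted
    }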
61. We will also assume that each node increments its sequence number only when there is some change in the state of its local links, not for timer expirations ("no packets time out").

The central point of this exercise is intended to be an illustration of the "bringing-up-adjacencies" process: in restoring the connection between the left- and right-hand networks, it is not sufficient simply to flood the information about the restored link. The two halves have evolved separately, and full information must be exchanged.

Given that each node increments its sequence number whenever it detects a change in its links to its neighbors, at the instant before the B—F link is restored the LSP data for each node is as follows:

    node   seq#   connects to
    A      2      B,C,D
    B      2      A,C
    C      2      A,B,D
    D      2      A,C
    F      2      G
    G      2      F,H
    H      1      G

When the B–F link is restored, OSPF has B and F exchange their full databases of all the LSPs they have seen with each other. Each then floods the other side's LSPs throughout its side of the now-rejoined network. These LSPs are as in the rows of the table above, except that B and F now each have sequence numbers of 3.

The initial sequence number of an OSPF node is actually −2^31 + 1.

62.
    Step   Confirmed                                    Tentative
    1      (A,0,-)
    2      (A,0,-)                                      (D,2,D) (B,5,B)
    3      (A,0,-) (D,2,D)                              (B,4,D) (E,7,D)
    4      (A,0,-) (D,2,D) (B,4,D)                      (E,6,D) (C,8,D)
    5      (A,0,-) (D,2,D) (B,4,D) (E,6,D)              (C,7,D)
    6      (A,0,-) (D,2,D) (B,4,D) (E,6,D) (C,7,D)

63. The answer is in the book.

64. (a) This could happen if the link changed state recently, and one of the two LSPs was old.
(b) If flooding is working properly, and if A and B do in fact agree on the state of the link, then eventually (rather quickly) whichever of the two LSPs was old would be updated by the same sender's newer version, and reports from the two sides of C would again agree.
The main reason that IP routers cannot easily learn new subnet locations by ex- amination of data packets is that they would then have to fall back on network- wide broadcast for delivery to unknown subnets. IP does indeed support a notion of broadcast, but broadcast in the presence of loop topology (which IP must sup- port) fails rather badly unless specific (shortest-path) routes to each individual subnet are already known by the routers. And even if some alternative mech- anism were provided to get routing started, path-length information would not be present in data packets, so future broadcasting would remain loop-unsafe. We note too that subnet routing requires that the routers learn the subnet masks, which are also not present in data packets. Finally, bridges may prefer passive learning simply because it avoids bridge-to-bridge compatibility issues. 66. If an IP packet addressed to a specific host A were inadvertently broadcast, and all hosts on the subnet did forwarding, then A would be inundated with multiple copies of the packet. Other reasons for hosts’ not doing routing include the risk that misconfigured hosts could interfere with routing, or might not have up-to-date tables, or might not even participate in the same routing protocol that the real routers were using. 68. (a) Giving each department a single subnet, the nominal subnet sizes are 27, 26, 25, 25 respectively; we obtain these by rounding up to the nearest power of 2. For example, a subnet with 128 addresses is large enough to contain 75 hosts. A possible arrangement of subnet numbers is as follows. Subnet numbers are in binary and represent an initial segment of the bits of the last byte of the IP address; anything to the right of the / represents host bits. The / thus represents the subnet mask. Any individual bit can, by symmetry, be flipped throughout; there are thus several possible bit assignments. A 0/ one subnet bit, with value 0; seven host bits B 10/ C 110/ D 111/ Solutions for Chapter 4 1. (a) Q will receive three routes to P, along links 1, 2, and 3. (b) A−→B traffic will take link 1. B−→A traffic will take link 2. Note that this strategy minimizes cost to the source of the traffic. (c) To have B−→A traffic take link 1, Q could simply be configured to prefer link 1 in all cases. The only general solution, though, is for Q to accept into its routing tables some of the internal structure of P, so that Q for example knows where A is relative to links 1 and 2. (d) If Q were configured to prefer AS paths through R, or to avoid AS paths involving links 1 and 2, then Q might route to P via R. 2. In the diagram below, the shortest path between A and B (measured by number of router hops) passes through AS P, AS Q, and AS P. !"#$# !# %# !"#&# While such a path might be desirable (the path via Q could be much faster or offer lower latency, for example), BGP would see the same AS number (for AS P) twice in the AS PATH. To BGP, such an AS PATH would appear as a loop, and be disallowed. 3. (a) The diameter D of a network organized as a binary tree, with root node as “backbone”, would be of order log2 A. The diameter of a planar rectangular grid of connections would be of order √ A. (b) For each AS S, the BGP node needs to maintain a record of the AS PATH to S, requiring 2×actual path length bytes. It also needs a list of all the net- works within S, requiring 4×number of networks bytes. 
Solutions for Chapter 4

1. (a) Q will receive three routes to P, along links 1, 2, and 3.
(b) A−→B traffic will take link 1. B−→A traffic will take link 2. Note that this strategy minimizes cost to the source of the traffic.
(c) To have B−→A traffic take link 1, Q could simply be configured to prefer link 1 in all cases. The only general solution, though, is for Q to accept into its routing tables some of the internal structure of P, so that Q, for example, knows where A is relative to links 1 and 2.
(d) If Q were configured to prefer AS paths through R, or to avoid AS paths involving links 1 and 2, then Q might route to P via R.

2. In the accompanying diagram (which does not survive this extraction), the shortest path between A and B (measured by number of router hops) passes through AS P, AS Q, and AS P. While such a path might be desirable (the path via Q could be much faster or offer lower latency, for example), BGP would see the same AS number (for AS P) twice in the AS_PATH. To BGP, such an AS_PATH would appear as a loop, and be disallowed.

3. (a) The diameter D of a network organized as a binary tree, with root node as "backbone", would be of order log2 A. The diameter of a planar rectangular grid of connections would be of order √A.
(b) For each AS S, the BGP node needs to maintain a record of the AS_PATH to S, requiring 2×(actual path length) bytes. It also needs a list of all the networks within S, requiring 4×(number of networks) bytes. Summing these up for all autonomous systems, we get 2AD + 4N, or 2AC log A + 4N and 2AC√A + 4N for the models from part (a), where C is a constant.

4. Many arrangements are possible, although perhaps not likely. Here is an allocation scheme that mixes two levels of geography with providers; it works with 48-bit InterfaceIDs. The subdivisions become much more plausible with 64-bit InterfaceIDs.

    Bytes 0-1:    3-bit prefix + country where site is located (5 bits is not enough to specify the country)
    Bytes 2-3:    provider
    Bytes 4-5:    geographical region within provider
    Bytes 6-8:    subscriber (large providers may have >64K subscribers)
    Bytes 8-9:    (byte 8 is oversubscribed) subnet
    Bytes 10-15:  InterfaceID

5. (a) P's table:

    address         nexthop
    C2.0.0.0/8      Q
    C3.0.0.0/8      R
    C1.A3.0.0/16    PA
    C1.B0.0.0/12    PB

Q's table:

    address         nexthop
    C1.0.0.0/8      P
    C3.0.0.0/8      R
    C2.0A.10.0/20   QA
    C2.0B.0.0/16    QB

R's table:

    address         nexthop
    C1.0.0.0/8      P
    C2.0.0.0/8      Q

(b) The same, except for the following changes of one entry each to P's and R's tables:

    P:  C3.0.0.0/8    Q   // was R
    R:  C1.0.0.0/8    Q   // was P

(c) Note the use of the longest-match rule to distinguish the entries for Q & QA in P's table, and for P & PA in Q's table.

P's table:

    address         nexthop
    C2.0.0.0/8      Q
    C2.0A.10.0/20   QA   // for QA
    C1.A3.0.0/16    PA
    C1.B0.0.0/12    PB

Q's table:

    address         nexthop
    C1.0.0.0/8      P
    C1.A3.0.0/16    PA   // for PA
    C2.0A.10.0/20   QA
    C2.0B.0.0/16    QB

6. The longest-match rule is intended for this. Note that all providers now have to include entries for PA and QB, though.

P's table:

    address         nexthop
    C2.0.0.0/8      Q
    C3.0.0.0/8      R
    C1.A3.0.0/16    Q    // entry for PA
    C1.B0.0.0/12    PB
    C2.0B.0.0/16    R    // entry for QB

Q's table:

    address         nexthop
    C1.0.0.0/8      P
    C3.0.0.0/8      R
    C1.A3.0.0/16    PA   // now Q's customer
    C2.0A.10.0/20   QA
    C2.0B.0.0/16    R    // entry for QB

R's table:

    address         nexthop
    C1.0.0.0/8      P
    C2.0.0.0/8      Q
    C1.A3.0.0/16    Q    // R also needs an entry for PA
    C2.0B.0.0/16    QB   // QB is now R's customer

7. (a) Inbound traffic takes a single path to the organization's address block, which corresponds to the organization's "official" location. This means all traffic enters the organization at a single point even if much shorter alternative routes exist.
(b) For outbound traffic, the organization could enter into its own tables all the highest-level geographical blocks for the outside world, allowing the organization to route traffic to the exit geographically closest to the destination.
(c) For an approach such as the preceding to work for inbound traffic as well, the organization would have to be divided internally into geographically based subnets, and the outside world would then have to accept routing entries for each of these subnets. Consolidation of these subnets into a single external entry would be lost.
(d) We now need each internal router to have entries for internal routes to all the other internal IP networks; this suffices to ensure internal traffic never leaves.

8. Perhaps the primary problem with geographical addressing is what to do with geographically dispersed sites that have their own internal connections. Routing all traffic to a single point of entry seems inefficient. At the time when routing-table size became a critical issue, most providers were regional and thus provider-based addressing was more or less geographical.

9. As described, ISP X is a customer of both ISP A and ISP B. If he advertises a path learned from A to ISP B, then B may send him traffic that he will then

[…]

(Figure 1: Answers to question 4.16 — two diagrams, each labeled "Source"; the drawings do not survive the extraction.)
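The longest-match rule leaned on in Exercises 5 and 6 is easy to demonstrate. A C++ sketch (ours; linear scan rather than a real trie, with entries echoing P's table from Exercise 6):

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct Entry { uint32_t prefix; int len; const char* nexthop; };

    const char* lookup(const std::vector<Entry>& table, uint32_t addr) {
        const char* best = "(no match)";
        int bestlen = -1;
        for (const Entry& e : table) {
            uint32_t mask = e.len ? ~uint32_t(0) << (32 - e.len) : 0;
            if ((addr & mask) == e.prefix && e.len > bestlen) {
                best = e.nexthop;
                bestlen = e.len;          // keep the longest matching prefix
            }
        }
        return best;
    }

    int main() {
        std::vector<Entry> table = {
            {0xC2000000, 8,  "Q"},        // C2.0.0.0/8
            {0xC3000000, 8,  "R"},        // C3.0.0.0/8
            {0xC1A30000, 16, "Q"},        // C1.A3.0.0/16 (PA)
            {0xC1B00000, 12, "PB"},       // C1.B0.0.0/12
            {0xC20B0000, 16, "R"},        // C2.0B.0.0/16 (QB)
        };
        printf("%s\n", lookup(table, 0xC1A30101));  // PA's block: /16 wins -> Q
        printf("%s\n", lookup(table, 0xC20C0000));  // falls back to C2/8 -> Q
    }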
22. MPLS has been thought to improve router performance because each label is a direct index into the routing table, and thus an MPLS-only router could avoid running the more complex longest-IP-prefix-match algorithm. But packet forwarding has many other aspects that influence performance, such as enqueueing packets and switching them across a backplane. These aspects are independent of the forwarding algorithm and have turned out to be the dominant performance-influencing factors.

23. (a) 8 bytes are needed to attach two MPLS labels.
(b) 20 bytes are needed for an additional IP header.
(c) Bandwidth efficiency for MPLS is 300/308 = 0.97, and for IP is 300/320 = 0.94. For 64-byte packets, MPLS has 64/72 = 0.89 and IP has 64/84 = 0.76. MPLS is relatively more efficient when the payload size is smaller.

24. Source routing cannot specify a long path because of the option size limit. Second, IP option processing is considerably more complex than normal IP forwarding, and can cause significant performance penalties. Finally, source routing cannot readily aggregate traffic with the same route into one class; by contrast, MPLS can aggregate such traffic as one FEC, represented by a single label at each hop, thus improving scalability.

25. A correspondent node has no way of knowing that the IP address of a mobile node has changed, and hence no way to send it a packet. A TCP connection will break if the IP address of one endpoint changes.

26. The home agent and the mobile node may be very far apart, leading to suboptimal routing.

27. Without some sort of authentication of updates, an attacker could tell the correspondent node to send all the traffic destined for a mobile node to a node that the attacker controls, thus stealing the traffic. Or, an attacker could tell any number of correspondent nodes to send traffic to some other node that the attacker wishes to flood with traffic.

Solutions for Chapter 5

1. (a) An application such as TFTP, when sending initial connection requests, might want to know the server isn't accepting connections.
(b) On typical Unix systems, one needs to open a socket with attribute IP_RAW (traditionally requiring special privileges) and receive all ICMP traffic.
(c) A receiving application would have no way to identify ICMP messages as such, or to distinguish between these messages and protocol-specific data.

2. (a) In the following, the client receives file "foo" when it thinks it has requested "bar".
1. The client sends a request for file "foo", and immediately aborts locally. The request, however, arrives at the server.
2. The client sends a new request, for file "bar". It is lost.
3. The server responds with the first data packet of "foo", answering the only request it has actually seen.
(b) Requiring the client to use a new port number for each separate request would solve the problem. To do this, however, the client would have to trust the underlying operating system to assign a new port number each time a new socket was opened. Having the client attach a timestamp or random number to the file request, to be echoed back in each data packet from the server, would be another approach fully under the application's control.
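The tagging fix suggested in 2(b) is a few lines of code. A C++ sketch, ours only; the packet structs, field names, and the stale tag value are hypothetical:

    #include <cstdint>
    #include <random>

    struct Request { uint32_t tag; const char* filename; };
    struct Data    { uint32_t tag; /* ... file contents ... */ };

    int main() {
        std::mt19937 rng(std::random_device{}());
        Request req{uint32_t(rng()), "bar"};   // tag the outgoing request
        // ... send req; then, for each arriving Data packet d: ...
        Data d{0xdeadbeefu};                   // pretend d answers an old request
        bool stale = (d.tag != req.tag);       // true: discard, it is "foo"'s data
        return stale ? 0 : 1;
    }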
The client assumes that the first well-formed packet it receives is this server data, and records the data's source port. Any subsequent packets from a different port are discarded and an error response is sent.

The basic stop-and-wait transfer is standard, although one must decide whether sequence numbers are allowed to wrap around or not. Here are approaches, TFTP's and otherwise, for (a)-(c):

(a) The most basic approach here is to require the server to keep track of connections, as long as they are active. The problem with this is that the client is likely to be simply an application, and can exit at any time. It may exit and retransmit a request for a different file, or a new request for the same file, before the server knows there was a problem or status change.
A more robust mechanism for this situation might be a CONNECT NUM field, either chosen randomly or clock-driven or incremented via some central file for each client connection attempt. Such a field would correspond roughly with TCP's initial sequence number (ISN).

We keep a table T, indexed by 〈lport, raddr, rport〉 and containing (among other things) data fields lISN and rISN for the local and remote ISNs.

    if (connections to lport are not being accepted)
        send RST
    else if (there is no entry in T for 〈lport, raddr, rport〉)    // new SYN
        put 〈lport, raddr, rport〉 into the table
        set rISN to be the received packet's ISN
        set lISN to be our own ISN
        send the reply SYN+ACK
        record the connection as being in state SYN_RECD
    else if (T[〈lport, raddr, rport〉] already exists)
        if (ISN in incoming packet matches rISN from the table)
            // SYN is a duplicate; ignore it
        else
            send RST to 〈raddr, rport〉

15. x =< y if and only if (y − x) ≥ 0, where the expression y − x is taken to be signed even though x and y are not.

16. (a) A would send an ACK to B for the new data. When this arrived at B, however, it would lie outside the range of "acceptable ACKs" and so B would respond with its own current ACK. B's ACK would be acceptable to A, and so the exchanges would stop. If B later sent less than 100 bytes of data, then this exchange would be repeated.
(b) Each end would send an ACK for the new, forged data. However, when received, both these ACKs would lie outside the range of "acceptable ACKs" at the other end, and so each of A and B would in turn generate their current ACK in response. These would again be the ACKs for the forged data, and these ACKs would again be out of range, and again the receivers would generate the current ACKs in response. These exchanges would continue indefinitely, until one of the ACKs was lost.
If A later sent 200 bytes of data to B, B would discard the first 100 bytes as duplicate, and deliver to the application the second 100 bytes. It would acknowledge the entire 200 bytes. This would be a valid ACK for A.
For more examples of this type of scenario, see Joncheray, L; A Simple Active Attack Against TCP; Proceedings of the Fifth USENIX UNIX Security Symposium, June, 1995.

17. Let H be the host to which A had been connected; we assume B is able to guess H. As we are also assuming telnet connections, B can restrict probes to H's telnet port (port 23).
First, B needs to find a port A had been using. For various likely ephemeral port numbers N, B sends an ACK packet from port N to 〈H,telnet〉. For many implementations, ephemeral ports start at some fixed value (e.g. N=1024) and increase sequentially; for an unshared machine it is unlikely that very many ports had been used.
If A had had no connection from port N, H will reply to B with a RST packet. But if H had had an outstanding connection to 〈A,N〉, then H will reply with either nothing (if B's forged ACK happened to be Acceptable, i.e. in the current window at the point when A was cut off), or the most recent Acceptable ACK (otherwise). Zero-byte data packets can, with most implementations, also be used as probes here. Once B finds a successful port number, B then needs to find the sequence number H is expecting; once B has this it can begin sending data on the connection as if it were still A.
To find the sequence number, B again takes advantage of the TCP requirement that H reply with the current ACK if B sends an ACK or DATA inconsistent with H's current receive window [that is, an "unacceptable ACK"]. In the worst case B's first probe lies in H's window, in which case B needs to send a second probe.

18. We keep a table T, indexed by 〈address,port〉 pairs, and containing an integer field for the ISN and a string field for the connection's DATA. We will use =< for sequence number comparison as in Exercise 15.

    if (SYN flag is set in P.TCPHEAD.Flags)
        create the entry T[〈P.IPHEAD.SourceAddr, P.TCPHEAD.SrcPort〉]
        T[...].ISN = P.TCPHEAD.SequenceNum
        T[...].DATA = 〈empty string〉
    else
        see if the DATA bit in P.TCPHEAD.Flags is set; if not, ignore
        look up T[〈P.IPHEAD.SourceAddr, P.TCPHEAD.SrcPort〉]
            (if not found, ignore the packet)
        see if P.TCPHEAD.SequenceNum =< T[...].ISN+100;
        if so, append the appropriate portion of the packet's data to T[...].DATA

19. (a) 1. C connects to A, and gets A's current clock-based ISNA1.
    2. C sends a SYN packet to A, purportedly from B. A sends SYN+ACK, with ISNA2, to B, which we are assuming is ignored.
    3. C makes a guess at ISNA2, e.g. ISNA1 plus some suitable increment, and sends the appropriate ACK to A, along with some data that has some possibly malign effect on A. As in step 2, this packet too has a forged source address of B.
    4. C does nothing further, and the connection either remains half-open indefinitely or else is reset, but the damage is done.
(b) In one 40 ms period there are 40 ms / 4 µs = 10,000 possible ISNAs; we would expect to need about 10,000 tries.
Further details can be found in Morris, RT; A Weakness in the 4.2BSD UNIX TCP/IP Software; Computing Science Technical Report No. 117, AT&T Bell Laboratories, Murray Hill, NJ, 1985.

20. (a) T=0.0  'a' sent
        T=1.0  'b' collected in buffer
        T=2.0  'c' collected in buffer
        T=3.0  'd' collected in buffer
        T=4.0  'e' collected in buffer
        T=4.1  ACK of 'a' arrives; "bcde" sent
        T=5.0  'f' collected in buffer
        T=6.0  'g' collected in buffer
        T=7.0  'h' collected in buffer
        T=8.0  'i' collected in buffer
        T=8.2  ACK arrives; "fghi" sent
(b) The user would type ahead blindly at times. Characters would be echoed between 4 and 8 seconds late, and echoing would come in chunks of four or so. Such behavior is quite common over telnet connections, even those with much more modest RTTs, but the extent to which this is due to the Nagle algorithm is unclear.
(c) With the Nagle algorithm, the mouse would appear to skip from one spot to another. Without the Nagle algorithm the mouse cursor would move smoothly, but it would display some inertia: it would keep moving for one RTT after the physical mouse was stopped. (We've assumed in this case that the mouse and the display are at the same end of the connection.)

21. (a) We have 4096 ports; we eventually run out if the connection rate averages more than 4096/60 = 70 per sec.
(The range used here for ephemeral ports, while small, is typical of older TCP implementations.)
(b) In the following we let A be the host that initiated the close (and that is in TIME WAIT); the other host is B. A is nominally the client; B the server.
If B fails to receive an ACK of its final FIN, it will eventually retransmit that FIN. So long as A remains in TIME WAIT it is supposed to reply again with the corresponding ACK. If the sequence number of the FIN were incorrect, A would send RST.
If we allow reopening before TIME WAIT expires, then a given very-late-arriving FIN might have been part of any one of a number of previous connections. For strict compliance, host A would have to maintain a list of prior connections, and if an old FIN arrived (as is theoretically possible, given that we are still within the TIME WAIT period for the old connection), host A would consult this list to determine whether the FIN had an appropriate sequence number and hence whether an ACK or RST should be sent.
Simply responding with an ACK to all FINs with sequence numbers before the ISN of the current connection would seem reasonable, though. The old connection, after all, no longer exists at B's end to be reset, and A knows this. A knows, in fact, that a prior final ACK or RST that it sent in response to B's FIN was received by B, since B allowed the connection to be reopened, and so it might justifiably not send anything.

    row #  SampleRTT  EstRTT  Dev    diff   TimeOut
    19     1.00       1.24    0.72   -0.27  4.13
    20     4.00       1.58    0.98    2.76  5.50
    21     1.00       1.51    0.93   -0.58  5.22
    22     1.00       1.45    0.88   -0.51  4.95
    23     1.00       1.39    0.82   -0.45  4.68
    24     1.00       1.34    0.77   -0.39  4.42
    25     1.00       1.30    0.72   -0.34  4.16
    26     4.00       1.64    0.96    2.70  5.49
    27     1.00       1.56    0.92   -0.64  5.25
    28     1.00       1.49    0.88   -0.56  4.99
    29     1.00       1.43    0.83   -0.49  4.74
    30     1.00       1.37    0.78   -0.43  4.48
    31     1.00       1.33    0.73   -0.37  4.24
    32     4.00       1.66    0.97    2.67  5.54

29. Here is the table of the updates to the EstRTT, etc. statistics. Packet loss is ignored; the SampleRTTs given may be assumed to be from successive singly transmitted segments. Note that the first column, therefore, is simply a row number, not a packet number, as packets are sent without updating the statistics when the measurements are ambiguous. Note also that both algorithms calculate the same values for EstimatedRTT; only the TimeOut calculations vary.

                                            new TimeOut       old TimeOut
    row  SampleRTT  EstRTT  Dev    diff     (EstRTT+4×Dev)    (2×EstRTT)
         --         1.00    0.10   --       1.40              2.00
    1    5.00       1.50    0.59   4.00     3.85              3.00
    2    5.00       1.94    0.95   3.50     5.74              3.88
    3    5.00       2.32    1.22   3.06     7.18              4.64
    4    5.00       2.66    1.40   2.68     8.25              5.32

New algorithm (TimeOut = EstimatedRTT + 4×Deviation): There are a total of three retransmissions, two for packet 1 and one for packet 3. The first packet after the change times out at T=1.40, the value of TimeOut at that moment. It is retransmitted, with TimeOut backed off to 2.8. It times out again 4.2 sec after the first transmission, and TimeOut is backed off to 5.6. At T=5.0 the first ACK arrives and the second packet is sent, using the backed-off TimeOut value of 5.6. This second packet does not time out, so this constitutes an unambiguous RTT measurement, and so the timing statistics are updated to those of row 1 above.
When the third packet is sent, with TimeOut=3.85, it times out and is retransmitted. When its ACK arrives the fourth packet is sent, with the backed-off TimeOut value, 2×3.85 = 7.70; the resulting RTT measurement is unambiguous, so the timing statistics are updated to row 2.
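As an aside, the per-row arithmetic in both tables above can be written as a short routine. The following C sketch (names ours) uses gain δ = 1/8 for both the EstimatedRTT and Deviation updates, which reproduces the EstRTT and Dev columns:

    #include <math.h>

    double EstRTT = 1.00, Dev = 0.10;     /* initial values, as in the tables */

    void update(double SampleRTT)
    {
        double delta = 0.125;             /* delta = 1/8 */
        double diff = SampleRTT - EstRTT;
        EstRTT = EstRTT + delta * diff;
        Dev    = Dev + delta * (fabs(diff) - Dev);
        /* new TimeOut = EstRTT + 4*Dev;  old TimeOut = 2*EstRTT */
    }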
When the fifth packet is sent, TimeOut=5.74, and no further timeouts occur.
If we continue the above table to row 9, we get the maximum value for TimeOut, of 10.1, at which point TimeOut decreases toward 5.0.

Original algorithm (TimeOut = 2×EstimatedRTT): There are five retransmissions: for packets 1, 2, 4, 6, 8. The first packet times out at T=2.0, and is retransmitted. The ACK arrives before the second timeout, which would have been at T=6.0. When the second packet is sent, the backed-off TimeOut of 4.0 is used and we time out again. TimeOut is now backed off to 8.0. When the third packet is sent, it thus does not time out; statistics are updated to those of row 1.
The fourth packet is sent with TimeOut=3.0. We time out once, and then transmit the fifth packet without timeout. Statistics are then updated to row 2. This pattern continues. The sixth packet is sent with TimeOut=3.88; we again time out once, send the seventh packet without loss, and update to row 3. The eighth packet is sent with TimeOut=4.64; we time out, back off, send packet 9, and update to row 4. Finally the tenth packet does not time out, as TimeOut = 2×2.66 = 5.32 is larger than 5.0.
TimeOut continues to increase monotonically towards 10.0, as EstimatedRTT converges on 5.0.

30. Let the real RTT (for successful transmissions) be 1.0 units. By hypothesis, every packet times out once and then the retransmission is acknowledged after 1.0 units; this means that each SampleRTT measurement is TimeOut+1 = EstimatedRTT+1. We then have

    EstimatedRTT = α×EstimatedRTT + β×SampleRTT
                 = EstimatedRTT + β×(SampleRTT − EstimatedRTT)
                 ≥ EstimatedRTT + β.

Thus it follows that the Nth EstimatedRTT is greater than or equal to Nβ.
Without the assumption TimeOut = EstimatedRTT we still have SampleRTT − EstimatedRTT ≥ 1 and so the above argument still applies.

31. For the steady state, assume the true RTT is 3 and EstimatedRTT is 1. At T=0 we send a data packet. Since TimeOut is twice EstimatedRTT=1, at T=2 the packet is retransmitted. At T=3 the ACK of the original packet returns (because the true RTT is 3); the measured SampleRTT is thus 3−2 = 1; this equals EstimatedRTT and so there is no change. This is illustrated by the following diagram:

    [Diagram: sender–receiver timeline labeled "Timeout and Retransmission", showing the packet sent at T=0, the timeout and retransmission at T=2, and the ACK arriving at T=3, with EstimatedRTT and SampleRTT marked.]

To get to such a steady state, assume that originally RTT = EstimatedRTT = 1.45, say, and RTT then jumps to 3.0 as above. The first packet sent under the new rules will time out and be retransmitted at T=2.9; when the ACK arrives at T=3.0 we record SampleRTT = 0.1. This causes EstimatedRTT to decrease. It will continue to grow smaller, monotonically (at least if β is not too large), converging on the value 1.0 as above.

32. A FIN or RST must lie in the current receive window. A RST outside this window is ignored; TCP responds to an out-of-window FIN with the current ACK:

    If an incoming segment is not acceptable, an acknowledgment should be sent in reply (unless the RST bit is set, if so drop the segment and return) [RFC793]

Note that a RST can lie anywhere within the current window; its sequence number need not be the next one in sequence.
If a FIN lies in the current window, then TCP waits for any remaining data and closes the connection. If a RST lies in the current window then the connection is immediately closed: If the RST bit is set then, any outstanding RECEIVEs and SEND should receive "reset" responses. All segment queues should be flushed.
Users should also receive an unsolicited general “connection reset” signal. Enter the CLOSED state, delete the TCB, and return. 33. (a) The first incarnation of the connection must have closed successfully and the second must have opened; this implies the exchange of FIN and SYN packets and associated ACKs. The delayed data must also have been suc- cessfully retransmitted. (b) One plausible hypothesis is that two routes were available from one host to the other. Traffic flowed on one link, which suddenly developed severe congestion or routing-loop delays at which point subsequent traffic was switched to the other, now-faster, link. It doesn’t matter whether the two Chapter 5 68 (c) The rlogin/rsh protocol authenticates clients by seeing that they are using a “reserved” port on the sending host (normally, a port only available to system-level processes). This would no longer be possible. However, the following variation would still be possible: when an rsh server host S receives a client request from host C, with connection num- ber N, then S could authenticate the request with C by initiating a second connection to a reserved port on C, whereupon some sort of authentication application on C would verify that connection number N was indeed being used by an authorized rsh client on C. Note that this scheme implies that connection numbers are at least visible to the applications involved. 37. (a) A program that connect()s, and then sends whatever is necessary to get the server to close its end of the connection (eg the string “QUIT”), and then sits there, idle but not disconnecting, will suffice. Note that the server has to be willing to initiate the active close based on some client action. (b) Alas, most telnet clients do not work here. Although many can connect to an arbitrary port, and issue a command such as QUIT to make the server initiate the close, they generally do close immediately in response to re- ceiving the server’s FIN. However, the sock program, written by W. Richard Stevens, can be used instead. In the (default) client mode, it behaves like a command-line telnet. The option -Q 100 makes sock wait 100 seconds after receiving the server FIN before it closes its end of the connection. Thus the command sock -Q 100 hostname 25 can be used to demonstrate FIN WAIT 2 with an SMTP (email) server (port 25) on hostname, using the QUIT command. sock is available from http://www.icir.org/christian/sock.html 38. Let A be the closing host and B the other endpoint. A sends message1, pauses, sends message2, and then closes its end of the connection for reading. B gets message1 and sends a reply, which arrives after A has performed the half-close. B doesn’t read message2 immediately; it remains in the TCP layer’s buffers. B’s reply arrives at A after the latter has half-closed, and so A responds with RST as per the quoted passage from RFC 1122. This RST then arrives at B, which aborts the connection and the remaining buffer contents (i.e. message2) are lost. Note that if A had performed a full-duplex close, the same scenario can occur. However, it now depends on B’s reply crossing A’s FIN in the network. The half-close-for-reading referred to in this exercise is actually purely a local state change; a connection that performs a half-close closing its end for writing may however send a FIN segment to indicate this state to the other endpoint. 39. 
Incrementing the Ack number for a FIN is essential, so that the sender of the FIN can determine that the FIN was received and not just the preceding data. For a SYN, any ACK of subsequent data would increment the acknowledgment number, and any such ACK would implicitly acknowledge the SYN as well (data Chapter 5 69 cannot be ACKed until the connection is established). Thus, the incrementing of the sequence number here is a matter of convention and consistency rather than design necessity. 40. (a) One method would be to invent an option to specify that the first n bytes of the TCP data should be interpreted as options. (b) A TCP endpoint receiving an unknown option might • close/abort the connection. This makes sense if the connection cannot meaningfully continue when the option isn’t understood. • ignore the option but keep the TCP data. This is the current RFC 1122 requirement. • send back “I don’t understand”. This is simply an explicit form of the previous response. A refinement might be to send back some kind of list of options the host does understand. • discard the accompanying the TCP data. One possible use might be if the data segment were encrypted, or in a format specified by the option. Some understanding would be necessary regarding sequence numbers for this to make sense; if the entire TCP data segment was an extended option block then the sequence numbers shouldn’t increase at all. • discard the first n bytes of the TCP data. This is an extension of the previous strategy to handle the case where the first n bytes of the TCP data was to be interpreted as an expanded options block; it is not clear though when the receiver might understand n but not the option itself. 41. TCP faces two separate crash-and-reboot scenarios: a crash can occur in the middle of a connection, or between two consecutive incarnations of a connection. The first leads to a “half-open” connection where one endpoint has lost all state regarding the connection; if either the stateless side sends a new SYN or the stateful side sends new data, the other side will respond with RST and the half- open connection will be dissolved bilaterally. If one host crashes and reboots between two consecutive connection incarna- tions, the only way the first incarnation could affect the second is if a late-arriving segment from the first happens to fit into the receive window of the second. TCP establishes a quasi-random initial sequence number during its three-way hand- shake at connection open time. A 64KB window, the maximum allowed by the original TCP, spans less than 0.0015% of the sequence number space. Therefore, there is very little chance that data from a previous incarnation of the connection will happen to fall in the current window; any data outside the window is dis- carded. (TCP also is supposed to implement “quiet time on startup”, an initial 1×MSL delay for all connections after bootup.) 42. (a) Non-exclusive open, reading block N, writing block N, and seeking to block N all are idempotent, i.e. have the same effect whether executed once or twice. Chapter 5 70 (b) create() is idempotent if it means “create if nonexistent, or open if it exists already”. mkdir() is idempotent if the semantics are “create the given di- rectory if it does not exist; otherwise do nothing”. 
delete() (for either file or directory) works this way if its meaning is “delete if the object is there; otherwise, ignore.” Operations fundamentally incompatible with at-least-once semantics in- clude exclusive open (and any other form of file locking), and exclusive create. (c) The directory-removing program would first check if the directory exists. If it does not, it would report its absence. If it does exist, it invokes the system call rmdir(). 43. (a) The problem is that reads aren’t serviced in FIFO order; disk controllers typically use the “elevator” or SCAN algorithm to schedule writes, in which the pool of currently outstanding writes is sorted by disk track number and the writes are then executed in order of increasing track number. Using a single channel would force writes to be executed serially even when such a sequence required lots of otherwise-unnecessary disk head motion. If a pool of N sequential channels were used, the disk controller would at any time have about N writes to schedule in the order it saw fit. (b) Suppose a client process writes some data to the server, and then the client system shuts down “gracefully”, flushing its buffers (or avails itself of some other mechanism to flush the buffer cache). At this point data on a local disk would be safe; however, a server crash would now cause the loss of client data remaining in the server’s buffers. The client might never be able to verify that the data was safely written out. (c) One approach would be to modify a protocol that uses sequential channels to support multiple independent outstanding requests on a single logical channel, and to support replies in an arbitrary order, not necessarily that in which the corresponding requests were received. Such a mechanism would allow the server to respond to multiple I/O requests in whatever order was most convenient. A subsequent request could now no longer serve as an ACK of a previous reply; ACKs would have to be explicit and non-cumulative. There would be changes in retransmission management as well: the client would have to maintain a list of the requests that hadn’t yet been answered and the server would have to maintain a list of replies that had been sent but not acknowledged. Some bound on the size of these lists (corresponding to window size) would be necessary. 44. (a) The client sends the request. The server executes it (and successfully com- mits any resulting changes to disk), but then crashes just before sending its reply. The client times out and resends the request, which is executed a second time by the server as it restarts. Chapter 5 73 52. (a) The answer here depends on how closely frame transmission is synchro- nized with frame display. Assuming playback buffers on the order of a full frame or larger, it seems likely that receiver frame-display finish times would not be synchronized with frame transmission times, and thus would not be particularly synchronized from receiver to receiver. In this case, receiver synchronization of RTCP reports with the end of frame display would not result in much overall synchronization of RTCP traffic. In order to achieve such synchronization, it would be necessary to have both a very uniform latency for all receivers and a rather low level of jitter, so that receivers were comfortable maintaining a negligible playback buffer. It would also be necessary, of course, to disable the RTCP randomization factor. The number of receivers, however, should not matter. 
(b) The probability that any one receiver sends in the designated 5% subinter- val is 0.05, assuming uniform distribution; the probability that all 10 send in the subinterval is 0.0510, which is negligible. (c) The probability that one designated set of five receivers sends in the desig- nated interval, and the other five do not, is (.05)5 × (.95)5. There are (10 choose 5) = 10!/5!5! ways of selecting five designated receivers, and so the probability that some set of five receivers all transmit in the designated in- terval is (10 choose 5) ×(.05)5×(.95)5 = 252×0.0000002418 = 0.006%. Multiplying by 20 gives a rough estimate of about 0.12% for the probabil- ity of an upstream traffic burst rivaling the downstream burst, in any given reply interval. 53. If most receivers are reporting high loss rates, a server might consider throttling back. If only a few receivers report such losses, the server might offer referrals to lower-bandwidth/lower-resolution servers. A regional group of receivers re- porting high losses might point to some local congestion; as RTP traffic is often tunneled, it might be feasible to address this by re-routing traffic. As for jitter measurements, we quote RFC 1889: The interarrival jitter field provides a second short-term measure of network congestion. Packet loss tracks persistent congestion while the jitter measure tracks transient congestion. The jitter measure may indicate congestion before it leads to packet loss. 54. Many answers are possible here. RTT estimation, and hence calculation of suit- able timeout values, is more difficult than TCP because of the lack of a closed feedback loop between sender and receiver. The solution could include looking for gaps in the RTP sequence number space. Running another protocol on top of RTP (see DCCP, RFC 4340, for example) to detect losses via an acknowledg- ment mechanism is another option. Chapter 5 74 Solutions for Chapter 6 1. (a) From the application’s perspective, it is better to define flows as process- to-process. If a flow is host-to-host, then an application running on a multi- user machine may be penalized (by having its packets dropped) if another application is heavily using the same flow. However, it is much easier to keep track of host-to-host flows; routers need only look at the IP ad- dresses to identify the flow. If flows are process-to-process (i.e. end-to- end), routers must also extract the TCP or UDP ports that identify the end- points. In effect, routers have to do the same demultiplexing that is done on the receiver to match messages with their flows. (b) If flows are defined on a host-to-host basis, then FlowLabel would be a hash of the host-specific information; that is, the IP addresses. If flows are process-to-process, then the port numbers should be included in the hash input. 2. (a) In a rate-based TCP the receiver would advertise a rate at which it could receive data; the sender would then limit itself to this rate, perhaps making use of a token bucket filter with small bucket depth. Congestion-control mechanisms would also be converted to terms of throttling back the rate rather than the window size. Note that a window-based model sending one window-full per RTT automatically adjusts its rate inversely proportional to the RTT; a rate-based model might not. Note also that if an ACK arrives for a large amount of data, a window-based mechanism may immediately send a burst of a corresponding large amount of new data; a rate-based mechanism would likely smooth this out. 
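To illustrate the token bucket filter mentioned in (a), here is a minimal C sketch; the structure and names are ours, not from any particular implementation.

    /* Tokens accumulate at 'rate' bytes/sec up to 'depth' bytes; a packet
       may be sent only when the bucket holds at least pktlen tokens.      */
    struct bucket {
        double tokens;   /* current contents, bytes      */
        double rate;     /* fill rate, bytes per second  */
        double depth;    /* bucket depth, bytes          */
        double last;     /* time of last update, seconds */
    };

    int can_send(struct bucket *b, int pktlen, double now)
    {
        b->tokens += (now - b->last) * b->rate;      /* accumulate tokens */
        if (b->tokens > b->depth)
            b->tokens = b->depth;                    /* cap at the depth  */
        b->last = now;
        if (b->tokens < pktlen)
            return 0;                                /* pace: wait        */
        b->tokens -= pktlen;
        return 1;
    }

A small depth holds the sender close to the advertised rate; a larger depth would permit exactly the bursts that the window-based model produces.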
(b) A router-centric TCP would send as before, but would receive (presumably a steady stream of) feedback packets from the routers. All routers would have to participate, perhaps through a connection-oriented packet-delivery model. TCP’s mechanisms for inferring congestion from changes in RTT would all go away. TCP might still receive some feedback from the receiver about its rate, but the receiver would only do so as a “router” of data to an application; this is where flow control would take place. 3. For Ethernet, throughput with N stations is 5/(N/2 + 5) = 10/(N + 10); to send one useful packet we require N/2 slots to acquire the channel and 5 slots to transmit. On average, a waiting station has to wait for about half the others to transmit first, so with N stations the delay is the time it takes for N/2 to transmit; combining this with a transmission time of N/2 + 5 this gives a total delay of N/2× (N/2 + 5) = N(N + 10)/4. Finally, power is throughput/delay = 40/N(N + 10)2. Graphs are below. 75 Chapter 6 78 Packet size flow Fi 1 100 1 100 2 100 1 200 3 100 1 300 4 100 1 400 5 190 2 190 6 200 2 390 7 110 3 110 8 50 3 170 We now send in increasing order of Fi: Packet 1, Packet 7, Packet 8, Packet 5, Packet 2, Packet 3, Packet 6, Packet 4. (b) To give flow 2 a weight of 4 we divide each of its Fi by 4, i.e. Fi = Fi−1 + Pi/4; again we are using the fact that there is no waiting. Packet size flow weighted Fi 1 100 1 100 2 100 1 200 3 100 1 300 4 100 1 400 5 190 2 47.5 6 200 2 97.5 7 110 3 110 8 50 3 170 Transmitting in increasing order of the weighted Fi we send as follows: Packet 5, Packet 6, Packet 1, Packet 7, Packet 8, Packet 2, Packet 3, Packet 4. 11. The answer is in the book. 12. (a) The advantage would be that the dropped packets are the resource hogs, in terms of buffer space consumed over time. One drawback is the need to recompute cost whenever the queue advances. (b) Suppose the queue contains three packets. The first has size 5, the second has size 15, and the third has size 5. Using the sum of the sizes of the earlier packets as the measure of time remaining, the cost of the third packet is 5 × 20 = 100, and the cost of the (larger) second is 15 × 5 = 75. (We have avoided the issue here of whether the first packet should always have cost 0, which might be mathematically correct but is arguably a misleading interpretation.) (c) We again measure cost in terms of size; i.e. we assume it takes 1 time unit to transmit 1 size unit. A packet of size 3 arrives at T=0, with the queue such that the packet will be sent at T=5. A packet of size 1 arrives right after. At T=0 the costs are 3 × 5 = 15 and 1 × 8 = 8. At T=3 the costs are 3 × 2 = 6 and 1 × 5 = 5. Chapter 6 79 At T=4 the costs are 3×1 = 3 and 1×4 = 4; cost ranks have now reversed. At T=5 the costs are 0 and 3. 13. (a) With round-robin service, we will alternate one telnet packet with each ftp packet, causing telnet to have dismal throughput. (b) With FQ, we send roughly equal volumes of data for each flow. There are about 552/41 ≈ 13.5 telnet packets per ftp packet, so we now send 13.5 telnet packets per ftp packet. This is better. (c) We now send 512 telnet packets per ftp packet. This excessively penalizes ftp. Note that with the standard Nagle algorithm a backed-up telnet would not in fact send each character in its own packet. 14. In light of the complexity of the solution here, instructors may wish to consider limiting the exercise to those packets arriving before, say, T=6. 
(a) For the ith arriving packet on a given flow we calculate its estimated finish- ing time Fi by the formula Fi = max{Ai, Fi−1}+1, where the clock used to measure the arrival times Ai runs slow by a factor equal to the number of active queues. The Ai clock is global; the sequence of Fi’s calculated as above is local to each flow. A helpful observation here is that packets arrive and are sent at integral wallclock times. The following table lists all events by wallclock time. We identify packets by their flow and arrival time; thus, packet A4 is the packet that arrives on flow A at wallclock time 4, i.e. the third packet. The last three columns are the queues for each flow for the subsequent time interval, including the packet currently being transmitted. The number of such active queues determines the amount by which Ai is incremented on the subsequent line. Multiple packets appear on the same line if their Fi values are all the same; the Fi values are in italic when Fi = Fi−1 + 1 (versus Fi = Ai + 1). We decide ties in the order flow A, flow B, flow C. In fact, the only ties are between flows A and C; furthermore, every time we transmit an A packet we have a C packet tied with the same Fi. Chapter 6 80 Wallclock Ai arrivals Fi sent A’s queue B’s queue C’s queue 1 1.0 A1,C1 2.0 A1 A1 C1 2 1.5 B2 2.5 C1 A2 B2 C1,C2 A2,C2 3.0 3 1.833 C3 4.0 B2 A2 B2 C2,C3 4 2.166 A4 4.0 A2 A2,A4 C2,C3 5 2.666 C5 5.0 C2 A4 C2,C3,C5 6 3.166 A6 5.0 A4 A4,A6 B6 C3,C5,C6 B6 4.166 C6 6.0 7 3.5 A7 6.0 C3 A6,A7 B6 C3,C5,C6,C7 C7 7.0 8 3.833 B8 5.166 B6 A6,A7 B6,B8 C5,C6,C7,C8 C8 8.0 9 4.166 A9 7.0 A6 A6,A7,A9 B8 C5,C6,C7,C8 10 4.5 A10 8.0 C5 A7,A9,A10 B8 C5,C6,C7,C8 11 4.833 B11 6.166 B8 A7,A9,A10 B8,B11 C6,C7,C8 12 5.166 B12 7.166 A7 A7,A9,A10 B11,B12 C6,C7,C8 13 5.5 C6 A9,A10 B11,B12 C6,C7,C8 14 5.833 B11 A9,A10 B11,B12 C7,C8 15 6.166 B15 8.166 A9 A9,A10 B12,B15 C7,C8 16 C7 A10 B12,B15 C7,C8 17 B12 A10 B12,B15 C8 18 A10 A10 B15 C8 19 C8 B15 C8 20 B15 B15 (b) For weighted fair queuing we have, for flow C, Fi = max{Ai, Fi−1} + 0.5 For flows A and B, Fi is as before. Here is the table corresponding to the one above. Chapter 6 83 Let N = CongestionWindow/MSS, the window size measured in segments. The goal of the original formula was so that after N segments arrived the net increment would be MSS, making the increment for one MSS-sized segment MSS/N . If instead we receive an ACK acknowledging an arbitrary Amoun- tACKed, we should thus expand the window by Increment = AmountACKed/N = (AmountACKed × MSS)/CongestionWindow 20. We may still lose a batch of packets, or else the window size is small enough that three subsequent packets aren’t sent before the timeout. Fast retransmit needs to receive three duplicate ACKs before it will retransmit a packet. If so many packets are lost (or the window size is so small) that not even three duplicate ACKs make it back to the sender, then the mechanism cannot be activated, and a timeout will occur. 21. We will assume in this exercise and the following two that when TCP encoun- ters a timeout it reverts to stop-and-wait as the outstanding lost packets in the existing window get retransmitted one at a time, and that the slow start phase begins only when the existing window is fully acknowledged. In particular, once one timeout and retransmission is pending, subsequent timeouts of later packets are suppressed or ignored until the earlier acknowledgment is received. Such timeouts are still shown in the tables below, but no action is taken. 
We will let Data N denote the Nth packet; Ack N here denotes the acknowledg- ment for data up through and including data N. (a) Here is the table of events with TimeOut = 2 sec. There is no idle time on the R–B link. Time A recvs A sends R sends cwnd size 0 Data0 Data0 1 1 Ack0 Data1,2 Data1 2 2 Ack1 Data3,4 (4 dropped) Data2 3 3 Ack2 Data5,6 (6 dropped) Data3 4 4 Ack3/timeout4 Data 4 Data5 1 5 Ack3/timeout5&6 Data4 1 6 Ack5 Data 6 Data6 1 7 Ack 6 Data7,8 (slow start) Data7 2 (b) With TimeOut = 3 sec, we have the following. Again nothing is transmitted at T=6 because ack 4 has not yet been received. Chapter 6 84 Time A recvs A sends R sends cwnd size 0 Data0 Data0 1 1 Ack0 Data1,2 Data1 2 2 Ack1 Data3,4 (4 dropped) Data2 3 3 Ack2 Data5,6 (6 dropped) Data3 4 4 Ack3 Data7,8 (8 dropped) Data5 5 5 Ack3/timeout4 Data4 Data7 1 6 Ack3/timeout5&6 Data4 1 7 Ack5/timeout7&8 Data6 Data6 1 8 Ack7 Data8 Data8 1 9 Ack8 Data9,10 (slow start) Data9 2 22. We follow the conventions and notation of the preceding exercise. Although the first packet is lost at T=4, it wouldn’t have been transmitted until T=8 and its loss isn’t detected until T=10. During the final few seconds the outstanding losses in the existing window are made up, at which point slow start would be invoked. A recvs cwnd A sends R sending/R’s queue Ack # size Data T=0 1 1 1/ T=1 1 2 2,3 2/3 T=2 2 3 4,5 3/4,5 T=3 3 4 6,7 4/5,6,7 T=4 4 5 8,9 5/6,7,8 9 lost T=5 5 6 10,11 6/7,8,10 11 lost T=6 6 7 12,13 7/8,10,12 13 lost T=7 7 8 14,15 8/10,12,14 15 lost T=8 8 9 16,17 10/12,14,16 17 lost T=9 8 9 12/14,16 T=10 8 9 9 14/16,9 2nd duplicate Ack8 T=11 8 16/9 T=12 8 9/ T=13 10 11 11/ B gets 9 T=14 12 13 13/ T=15 14 15 15/ T=16 16 17 17/ T=17 17 2 18,19 18/19 slow start 23. R’s queue size is irrelevant because the R-B link changed from having a band- width delay to having a propagation delay only. That implies that packets leave R as soon as they arrive and hence no queue can develop. The problem now be- comes rather trivial compared to the two previous questions. Because no queue can develop at the router, packets will not be dropped, so the window continues to grow each RTT. In reality this scenario could happen but would ultimately be limited by the advertised window of the connection. Note that the question is somewhat confusingly worded—it says that 2 packets Chapter 6 85 take one second to send, but since this is propagation delay rather than bandwidth delay, any number of packets can be sent in one second. Notation and conventions are again as in #21 above. A recvs cwnd A sends data # T=0 1 1 T=1 Ack1 2 2,3 T=2 Ack3 4 4,5,6,7 T=3 Ack7 8 8–15 T=4 Ack15 16 16–31 T=5 Ack31 32 32–63 T=6 Ack63 64 64–127 T=7 Ack127 128 127–255 T=8 Ack255 256 255–511 24. With a full queue of size N, it takes an idle period on the sender’s part of N+1 seconds for R1’s queue to empty and link idling to occur. If the connection is maintained for any length of time with CongestionWindow=N, no losses occur but EstimatedRTT converges to N. At this point, if a packet is lost the timeout of 2×N means an idle stretch of length 2N − (N+1) = N−1. With fast retransmit, this idling would not occur. 25. The router is able in principle to determine the actual number of bytes outstand- ing in the connection at any time, by examining sequence and acknowledgment numbers. This we can take to be the congestion window except for immediately after when the latter decreases. 
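In sketch form (all names ours), the bookkeeping such a router would do is simply:

    /* Per-connection state for a router observing both directions.
       Sequence-number wraparound is ignored for simplicity.        */
    unsigned int max_seq = 0;   /* highest SequenceNum+length seen from the sender */
    unsigned int max_ack = 0;   /* highest Acknowledgment seen from the receiver   */

    void on_data(unsigned int seq, unsigned int len)
    {
        if (seq + len > max_seq) max_seq = seq + len;
    }

    void on_ack(unsigned int ack)
    {
        if (ack > max_ack) max_ack = ack;
    }

    /* bytes outstanding -- the router's estimate of the congestion window: */
    unsigned int outstanding(void) { return max_seq - max_ack; }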
The host is complying with slow start at startup if only one more packet is outstanding than the number of ACKs received. This is straightforward to measure.
Slow start after a coarse-grained timeout is trickier. The main problem is that the router has no way to know when such a timeout occurs; the TCP might have inferred a lost packet by some other means. We may, however, on occasion be able to rule out three duplicate ACKs, or even two, which means that a retransmission might be inferred to represent a timeout.
After any packet is retransmitted, however, we should see the congestion window fall at least in half. This amounts to verifying multiplicative decrease, though, not slow start.

26. Using ACKs in this manner allows very rapid increase and control over CongestionWindow. Stefan Savage suggests requiring ACKs to include a nonce as a solution. That is, ACKs must include information from the data being ACKed in order to be valid.

27. Slow start is active up to about 0.5 sec on startup. At that time a packet is sent that is lost; this loss results in a coarse-grained timeout at T=1.9. At that point slow start is again invoked, but this time TCP changes to the linear-increase phase of congestion avoidance before the congestion window gets large.

29. (a), (b): [Diagrams: sequence-number-vs-time plots for the two cases, marking the data and ACK traces, the third duplicate ACK, the retransmitted data, the points where the sender stops sending and resumes, and the window moving forward.]

32. We might alternate between congestion-free backoff and heavy congestion, moving from the former to the latter in as little as 1 RTT. Moving from congestion back to no congestion unfortunately tends not to be so rapid.
TCP Reno also oscillates between congestion and non-congestion, but the periods of non-congestion are considerably longer.

33. Marking a packet allows the endpoints to adjust to congestion more efficiently—they may be able to avoid losses (and timeouts) altogether by slowing their sending rates. However, transport protocols must be modified to understand and account for the congestion bit. Dropping packets leads to timeouts, and therefore may be less efficient, but current protocols (such as TCP) need not be modified to use RED. Also, dropping is a way to rein in an ill-behaved sender.

34. (a) We have TempP = MaxP × (AvgLen − MinThreshold)/(MaxThreshold − MinThreshold). AvgLen is halfway between MinThreshold and MaxThreshold, which implies that the fraction here is 1/2 and so TempP = MaxP/2 = 0.005. We now have Pcount = TempP/(1 − count × TempP) = 1/(200 − count). For count=1 this is 1/199; for count=100 it is 1/100.
(b) Evaluating the product (1 − P1) × · · · × (1 − P50) gives

    (198/199) × (197/198) × (196/197) × · · · × (150/151) × (149/150)

which all telescopes down to 149/199, or 0.7487.

35. The answer is in the book.

36. The difference between MaxThreshold and MinThreshold should be large enough to accommodate the average increase in the queue length in one RTT; with TCP we expect the queue length to double in one RTT, at least during slow start, and hence want MaxThreshold to be at least twice MinThreshold. MinThreshold should also be set at a high enough value so that we extract maximum link utilization.
If MaxThreshold is too large, however, we lose the advantages of maintaining a small queue size; excess packets will simply spend time waiting. 37. Only when the average queue length exceeds MaxThreshold are packets au- tomatically dropped. If the average queue length is less than MaxThreshold, incoming packets may be queued even if the real queue length becomes larger than MaxThreshold. The router must be able to handle this possibility. 38. It is easier to allocate resources for an application that can precisely state its needs, than for an application whose needs vary over some range. Bursts con- sume resources, and are hard to plan for. 39. Between MinThreshold and MaxThreshold we are using the drop probability as a signaling mechanism; a small value here is sufficient for the purpose and a larger value simply leads to multiple packets dropped per TCP window, which tends to lead to unnecessarily small window sizes. Above MaxThreshold we are no longer signaling the sender. There is no logical continuity intended between these phases. 40. The bit allows for incremental deployment, in which some endpoints respond to congestion marks and some do not. Without this bit, ECN-enabled routers would mark packets during congestion rather than dropping them, but some (presum- ably older, not updated) endpoints would not recognize the mark, and hence would not back off during congestion, crowding out the ECN-compliant end- points, which would then have the incentive to ignore ECN marks as well. The result could actually be congestion collapse as in the pre-congestion-controlled Internet. 41. (a) Assume the TCP connection has run long enough for a full window to be outstanding (which may never happen if the first link is the slowest). We first note that each data packet triggers the sending of exactly one ACK, and each ACK (because the window size is constant) triggers the sending of exactly one data packet. We will show that two consecutive RTT-sized intervals contain the same number of transmissions. Consider one designated packet, P1, and let the Chapter 6 90 first RTT interval be from just before P1 is sent to just before P1’s ACK, A1, arrives. Let P2 be the data packet triggered by the arrival of A1, let A2 be the ACK for P2, and let the second interval be from just before the sending of P2 to just before the receipt of A2. Let N be the number of segments sent within the first interval, i.e., counting P1 but not P2. Then, because packets don’t cross, this is the number of ACKs received during the second RTT interval, and these ACKs trigger the sending of exactly N segments during the second interval as well. (b) The following shows a window size of four, but only two packets sent per RTT once the steady state is reached. It is based on an underlying topology A—R—B, where the A–R link has infinite bandwidth and the R–B link sends one packet per second each way. We thus have RTT=2 sec; in any 2-second interval beginning on or after T=2 we send only two packets. T=0 send data[1] through data[4] T=1 data[1] arrives at destination; ACK[1] starts back T=2 receive ACK[1], send data[5] T=3 receive ACK[2], send data[6] T=4 receive ACK[3], send data[7] The extra packets are, of course, piling up at the intermediate router. 42. The first time a timed packet takes the doubled RTT, TCP Vegas still sends one windowful and so measures an ActualRate = CongestionWindow/RTTnew of half of what it had been, and thus about half (or less) of ExpectedRate. 
We then have Diff = ExpectedRate−ActualRate ≈ (1/2)×ExpectedRate, which is relatively large (and, in particular, larger than β), so TCP Vegas starts reducing CongestionWindow linearly. This process stops when Diff is much closer to 0; that is, when CongestionWindow has shrunk by a factor close to two. The ultimate effect is that we underestimate the usable congestion window by almost a factor of two. 43. (a) If we send 1 packet, then in either case we see a 1 sec RTT. If we send a burst of 10 packets, though, then in the first case ACKs are sent back at 1 sec intervals; the last packet has a measured RTT of 10 sec. The second case gives a 1 sec RTT for the first packet and a 2 sec RTT for the last. The technique of packet-pairs, sending multiple instances of two consecu- tive packets right after one another and analyzing the minimum time differ- ence between their ACKs, achieves the same effect; indeed, packet-pair is sometimes thought of as a technique to find the minimum path bandwidth. In the first case, the two ACKs of a pair will always be 1 second apart; in the second case, the two ACKs will sometimes be only 100 ms apart. (b) In the first case, TCP Vegas will measure RTT = 3 as soon as there is a full window outstanding. This means ActualRate is down to 1 packet/sec. However, BaseRTT is 1 sec, and so ExpectedRate = CongestionWindow/BaseRTT is 3 packets/sec. Hence, Diff is 2 packets/sec, and CongestionWindow will be decreased. Chapter 6 93 traffic via a single FIFO queue; the only difference is that all traffic is now considered nonreserved. The state loss should thus make no difference. (b) If the router used weighted fair queuing to segregate reserved traffic, then a state loss may lead to considerable degradation in service, because the reserved traffic now is forced to compete on an equal footing with hoi polloi traffic. (c) Suppose new reservations from some third parties reach the router before the periodic refresh requests are received to renew the original reservations; if these new reservations use up all the reservable capacity the router may be forced to turn down the renewals. Solutions for Chapter 7 1. Each string is preceded by a count of its length; the array of salaries is preceded by a count of the number of elements. That leads to the following sequence of integers and ASCII characters being sent: 7 R I C H A R D 4376 8 D E C E M B E R 2 1998 3 80000 85000 90000 2 2. The answer is in the book. 5. Limited measurements suggest that, at least in one particular setting, use of htonl slows the array-converting loop down by about a factor of two. 6. The following measurements were made on a 300MHz Intel system, compiling with Microsoft’s Visual C++ 6.0 and optimizations turned off. We normalize to the case of a loop that repeatedly assigns the same integer variable to another: for (i=0;i<N;i++) {j=k} Replacing the loop body above with j=htonl(k) made the loop take about 2.9 times longer. The following homemade byte-swapping code took about 3.7 times longer: char * p = (char *) & k; char * q = (char *) & j; q[0]=p[3]; q[1]=p[2]; q[2]=p[1]; q[3]=p[0]; For comparison, replacing the loop body with an array copy A[i]=B[i] took about 2.8 times longer. 7. ASN.1 encodings are as follows: INT 4 101 INT 4 10120 INT 4 16909060 8. The answer is in the book. 9. Here are the encodings. 
    101       be:  00000000 00000000 00000000 01100101
    101       le:  01100101 00000000 00000000 00000000
    10120     be:  00000000 00000000 00100111 10001000
    10120     le:  10001000 00100111 00000000 00000000
    16909060  be:  00000001 00000010 00000011 00000100
    16909060  le:  00000100 00000011 00000010 00000001

For more on big-endian versus little-endian we quote Jonathan Swift, writing in Gulliver's Travels:

    ...Which two mighty powers have, as I was going to tell you, been engaged in a most obstinate war for six and thirty moons past. It began upon the following occasion. It is allowed on all hands, that the primitive way of breaking eggs before we eat them, was upon the larger end: but his present Majesty's grandfather, while he was a boy, going to eat an egg, and breaking it according to the ancient practice, happened to cut one of his fingers. Whereupon the Emperor his father published an edict, commanding all his subjects, upon great penalties, to break the smaller end of their eggs. The people so highly resented this law, that our histories tell us there have been six rebellions raised on that account.... Many hundred large volumes have been published upon this controversy: but the books of the Big-Endians have been long forbidden, and the whole party rendered incapable by law of holding employments.

10. The answer is in the book.

11. The problem is that we don't know whether the RPCVersion field is in big-endian or little-endian format until after we extract it, but we need this information to decide on which extraction to do.
It would be possible to work around this problem provided that among all the version IDs assigned, the big-endian representation of one ID never happened to be identical to the little-endian representation of another. This would be the case if, for example, future versions of XDR continued to use big-endian format for the RPCVersion field, but not necessarily elsewhere.

12. It is often possible to do a better job of compressing the data if one knows something about the type of the data. This applies even to lossless compression; it is particularly true if lossy compression can be contemplated. Once encoded in a message and handed to the encoding layer, all the data looks alike, and only a generic, lossless compression algorithm can be applied.

13. [The DEC-20 was perhaps the best-known example of 36-bit architecture.]
Incoming 32-bit integers are no problem; neither are outbound character strings. Outbound integers could either be sent as 64-bit integers, or else could lose the high-order bits (with or without notification to the sender). For inbound strings, one approach might be to strip them to 7 bits by default, make a flag available to indicate whether any of the eighth bits had been set, and, if so, make available a lossless mechanism (perhaps one byte per word) of re-reading the data.

14. Here is a C++ solution, in which we make netint⇒int an automatic conversion. To avoid potential ambiguity, we make use of the explicit keyword in the constructor converting int to netint, so that this does not also become an automatic conversion. (Note that the ambiguity would require additional code to realize.) To support assignment netint = int, we introduce an assignment operator.

    class netint {
    public:
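        // What follows is a sketch of how the class might be completed,
        // consistent with the description above; htonl() and ntohl() are
        // the standard conversions declared in <arpa/inet.h>.
        netint() : _data(0) {}
        explicit netint(int n) : _data(htonl(n)) {}    // int -> netint: explicit only
        netint & operator=(int n)                      // supports netint = int
            { _data = htonl(n); return *this; }
        operator int() const { return ntohl(_data); }  // netint -> int: automatic
    private:
        unsigned int _data;    // always held in network (big-endian) byte order
    };

    // Usage might then look like:
    //     netint n(5);    // or: netint n; n = 5;
    //     int h = n;      // converts back to host order automatically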