[Horwitz02] Section 11.11 Tuning Network Performance


Tuning Network Performance

Even the most experienced network professionals struggle to master all of the techniques of network performance tuning. Even so, just performing a few basic tuning tasks can greatly increase the performance and reliability of your network communications. The following sections discuss some of the most basic network performance tuning tasks.

Hard-coding Duplex Modes

As you learned in the previous section, full-duplex communications can decrease latency on a network connection by allowing simultaneous two-way communications between two interfaces. However, full-duplex can be used only in switched environments where there is only one peer on the same network segment. Machines connected to hubs are relegated to half-duplex mode.

You should not let autonegotiation decide the duplex for you in a production environment; it is just too unreliable. Duplex mismatches caused by botched autonegotiation are not uncommon in multiple-architecture environments, and they can cause network errors and high latency: problems you want to avoid at all costs. The solution is to hard-code the proper duplex mode into your switches and servers to disable autonegotiation altogether.

On a Cisco switch running IOS 12.0, you can hard-code the duplex as follows:

switch#conf t
Enter configuration commands, one per line. End with CNTL/Z.
switch(config)#int Fa0/16
switch(config-if)#duplex full
switch(config-if)#
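Hard-coding the duplex alone can leave the speed subject to negotiation, so it is usually wise to fix the speed on the same interface. A sketch of how this might look on a comparable Catalyst switch follows; command availability varies by platform and IOS version, so verify against your switch's documentation:

```
switch(config-if)#speed 100
switch(config-if)#end
switch#show interfaces Fa0/16 status
```

The `show interfaces ... status` line is a quick way to confirm that the port came up at the forced speed and duplex.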

Solaris is a bit more complicated (as usual): You must disable all of the other possible modes and enable only the speed and duplex combinations that you want. You can accomplish this in /etc/system, as follows; the disabling takes effect upon reboot (you can use ndd to perform these operations without rebooting, but the settings will not persist across reboots):

# Full Duplex 100 Mb
set hme:hme_adv_autoneg_cap=0
set hme:hme_adv_100fdx_cap=1

# Half Duplex 100 Mb
set hme:hme_adv_autoneg_cap=0
set hme:hme_adv_100fdx_cap=0
set hme:hme_adv_100hdx_cap=1

# Full Duplex 10 Mb
set hme:hme_adv_autoneg_cap=0
set hme:hme_adv_100fdx_cap=0
set hme:hme_adv_100hdx_cap=0
set hme:hme_adv_10fdx_cap=1
set hme:hme_adv_10hdx_cap=0

# Half Duplex 10 Mb
set hme:hme_adv_autoneg_cap=0
set hme:hme_adv_100fdx_cap=0
set hme:hme_adv_100hdx_cap=0
set hme:hme_adv_10fdx_cap=0
set hme:hme_adv_10hdx_cap=1
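As noted above, ndd can apply the same settings at runtime without a reboot, though they are lost on the next restart. A sketch for forcing 100 Mb full duplex on the first hme instance might look like the following; the parameter names mirror the /etc/system entries, but verify them against your driver's documentation:

```
# Select the first hme instance, then force 100 Mb full duplex
ndd -set /dev/hme instance 0
ndd -set /dev/hme adv_autoneg_cap 0
ndd -set /dev/hme adv_100fdx_cap 1

# Read back the link state (running ndd without -set gets a parameter)
ndd /dev/hme link_status
```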

You can configure a NIC in OpenBSD to be full duplex with ifconfig, as follows, but only if your card supports it.

# ifconfig xl0 media 100baseTX mediaopt full-duplex

In order to persist across reboots, you must also configure this in /etc/hostname.INT, where INT is the name of your network interface. For example, in /etc/hostname.xl0,

inet 10.1.1.1 0xffffff00 10.1.1.255 media '100baseTX' mediaopt 'full-duplex'
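Before forcing a mode, you can ask the driver which media types the card actually supports. On OpenBSD, ifconfig's -m flag lists them (the exact output varies by driver):

```
# ifconfig -m xl0
```

If 100baseTX with full-duplex does not appear in the supported media list, the forced setting above will not work.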



The settings for Linux NICs are vendor-dependent, and there is no generic method to hard-code the duplex. See your NIC vendor's documentation for more details.

Prioritizing Important Traffic

A WAN connection like a T1 or T3 is usually not limited to carrying just one protocol. A typical T1 to the Internet might carry well over ten individual Internet protocols at any one time during peak usage, including HTTP, SMTP, and DNS. A T3 between two data centers might carry more application-specific traffic, such as Oracle SQL*Net or NFS. In either case, some types of traffic are probably more important than others, and you should prioritize them to ensure their timely transport.

In general, you can prioritize network traffic into four categories. The highest-priority traffic is usually that of interactive applications, such as Telnet or SSH, as their users expect speedy responses every time a key is pressed. These are also the protocols you use to log in to remote servers, so it is important that they function efficiently at all times.

Another type of traffic is that which supports your technology infrastructure, including DNS. Without DNS, your servers and others querying your name servers cannot resolve names into IP addresses, and all communications to named hosts will stall. These types of protocols should be prioritized just below interactive traffic.

Next in the priority list are your application protocols. If running Oracle SQL queries over a WAN connection is critical to your applications, you should prioritize SQL*Net on port 1521. Below these application protocols lies everything else: protocols that aren't important to your organization or that aren't time-sensitive. Routers usually lump unspecified protocols into this default pool.

Prioritize Traffic on the Correct Router

On a private WAN link such as a T1 between your office and data center, priorities should be set on the router that sends most of the traffic. This ensures that no traffic crosses the WAN link before it should.


An e-commerce company with several Web applications might prioritize its traffic as follows:

  1. Telnet (TCP 23), SSH (TCP 22)

  2. DNS (TCP/UDP 53), SNMP (UDP 161)

  3. HTTP (TCP 80), HTTPS (TCP 443)

  4. Other traffic

This order would be represented on a Cisco router running IOS 12.0 as follows:

queue-list 1 protocol ip 1 tcp 22
queue-list 1 protocol ip 1 tcp telnet
queue-list 1 protocol ip 2 udp domain
queue-list 1 protocol ip 2 tcp domain
queue-list 1 protocol ip 2 udp 161
queue-list 1 protocol ip 3 tcp www
queue-list 1 protocol ip 3 tcp 443
queue-list 1 default 4
queue-list 1 queue 1 limit 40
queue-list 1 queue 2 limit 80
queue-list 1 queue 3 byte-count 2000 limit 120
queue-list 1 queue 4 byte-count 3000 limit 160
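Note that a queue list does nothing until it is attached to an interface. Assuming the T1 terminates on serial interface Serial0 (the interface name here is illustrative), the list above would be applied as follows:

```
router(config)#int Serial0
router(config-if)#custom-queue-list 1
```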

Real-World Example: Sluggish SSH

Programmers at a company with a T1 line from the office to the data center complained to the system administrator that they were not able to log in via SSH to any of the servers at the data center. Upon further investigation, the administrator saw almost 100% utilization on the T1, all in Oracle SQL*Net traffic on port 1521. A long-running query returning hundreds of megabytes of data was clogging the T1, eclipsing all other traffic with sheer volume, including SSH. The administrator quickly remedied the problem by setting up priorities on the router to give the interactive SSH protocol priority over other bandwidth-hungry protocols like SQL*Net.


Tuning TCP Timers

TCP is a stateful protocol, and as such each side is responsible for maintaining the connection with the other side. Various timers in the Unix kernel are employed to assist in this process, and tuning some of them can provide a big boost in capacity and performance. The keepalive and TIME_WAIT timers are two of the most commonly used of these devices, as discussed in the sections that follow.

Keepalive Timer

If a client crashes or is otherwise interrupted with an open TCP connection to a server, the connection is not immediately broken because the client did not explicitly close it. This connection would remain open indefinitely if not for the keepalive timer, which specifies how long these one-sided connections can remain open.

On Solaris the default value of the keepalive timer is 7,200,000 milliseconds, or 2 hours. On a busy server with thousands of clients, if each broken connection warrants a 2-hour wait for its port to be reclaimed, the server may run out of available ports, causing new incoming connections to be refused. A more reasonable value on a busy server like this would be 5 minutes. You can use ndd to set the timer in milliseconds, as follows:

# ndd -set /dev/tcp tcp_keepalive_interval 300000
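Running ndd without the -set flag reads a parameter back, which is a quick way to confirm that the change took effect:

```
# ndd /dev/tcp tcp_keepalive_interval
300000
```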

TIME_WAIT Interval

The TIME_WAIT interval determines the amount of time the kernel will wait before reusing a port from a closed TCP connection. The reason behind this delay is data integrity: what happens if the server closes the connection before the client is ready? The client may still be in the process of sending data to the server; if the server port were reused, that data might go to the wrong process, causing all kinds of trouble.

The TIME_WAIT interval is a good thing, but it also creates an interesting problem. If a large number of connections arrive at a server and are closed successfully, the ports associated with those connections must wait out the TIME_WAIT interval before they can be reused. Considering the default interval is 4 minutes on Solaris, this is a long time to wait for hundreds or even thousands of a limited number of ports to be reused. You can tune TIME_WAIT to 1 or 2 minutes with ndd, as follows:

# ndd -set /dev/tcp tcp_time_wait_interval 60000
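To gauge whether TIME_WAIT buildup is actually a problem on a given server, count the sockets in that state before and after tuning. A sketch using standard tools:

```
# Count connections currently parked in TIME_WAIT
netstat -an | grep -c TIME_WAIT
```

If the count routinely approaches the number of usable ephemeral ports, shortening the interval (or increasing the ephemeral port range) is worth considering.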

