Notes on Asynchronous I/O


Notes on the Asynchronous I/O Implementation

(November 2008)

Fixed Thread Pool

An asynchronous channel group associated with a fixed thread pool of size N submits N tasks that wait on I/O or completion events from the kernel. Each task simply dequeues an event, performs any necessary I/O completion, and then dispatches directly to the user's completion handler that consumes the result. When the completion handler terminates normally, the task returns to waiting on the next event. If the completion handler terminates due to an uncaught error or runtime exception, the task terminates and is immediately replaced by a new task. This is depicted in the following diagram:

[Diagram: fixed thread pool]

This configuration is relatively simple and delivers good performance for suitably designed applications. Note that it does not support the creation of threads on demand or trimming back of the thread pool when idle. It is also not suitable for applications with completion handler implementations that block indefinitely; if all threads are blocked in completion handlers then I/O events cannot be serviced (forcing the operating system to queue accepted connections, for example). Tuning requires choosing an appropriate value for N.
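For illustration, this is how such a group is created with the standard withFixedThreadPool factory method (the class name and the pool size of 4 are illustrative; the sketch shuts the group down immediately since no channels are bound to it):

```java
import java.io.IOException;
import java.nio.channels.AsynchronousChannelGroup;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class FixedGroupDemo {
    public static void main(String[] args) throws IOException, InterruptedException {
        // Create a group backed by a fixed pool of N = 4 threads; each thread
        // waits on I/O or completion events and dispatches directly to the
        // user's completion handlers.
        AsynchronousChannelGroup group = AsynchronousChannelGroup
                .withFixedThreadPool(4, Executors.defaultThreadFactory());

        // No channels were bound to the group, so shutdown completes promptly.
        group.shutdownNow();
        boolean terminated = group.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("terminated: " + terminated);
    }
}
```

Channels bound to the group (for example via AsynchronousSocketChannel.open(group)) then have their completion handlers run on these N threads.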

User-supplied Thread Pool

An asynchronous channel group associated with a user-supplied thread pool submits tasks to the thread pool that simply invoke the user's completion handler. I/O and completion events from the kernel are handled by one or more internal threads that are not visible to the user application. This configuration is depicted in the following diagram:

[Diagram: user-supplied thread pool]

This configuration works with most thread pools (cached or fixed), with the following exceptions:

  1. The thread pool must support unbounded queueing.
  2. The thread that invokes the execute method must never execute the task directly. That is, internal threads do not invoke completion handlers.
  3. On Windows, thread pool keep-alive must be disabled. This restriction arises because I/O operations are tied to the initiating thread by the kernel: if a thread terminates, outstanding I/O operations that it initiated may be aborted.

This configuration delivers good performance despite the hand-off per I/O operation. When combined with a thread pool that creates threads on demand, it is suitable for applications with completion handlers that occasionally need to block for long periods (or indefinitely). The value of M, the number of internal threads, is not exposed in the API and requires a system property to configure (the default is 1).
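As a sketch, a group over a user-supplied cached pool is created with the withThreadPool factory method (the class name is illustrative; a cached pool creates threads on demand, so handlers that occasionally block do not starve the internal event-handling threads):

```java
import java.nio.channels.AsynchronousChannelGroup;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class UserPoolDemo {
    public static void main(String[] args) throws Exception {
        // The user-supplied pool runs completion handlers; internal threads
        // (not visible here) handle I/O and completion events from the kernel.
        ExecutorService pool = Executors.newCachedThreadPool();
        AsynchronousChannelGroup group = AsynchronousChannelGroup.withThreadPool(pool);

        // The group owns the executor: shutting down the group shuts it down too.
        group.shutdownNow();
        boolean terminated = group.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println("terminated: " + terminated);
        System.out.println("executor shutdown: " + pool.isShutdown());
    }
}
```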

Default Thread Pool

Simpler applications that do not create their own asynchronous channel group will use the default group, which has an associated thread pool that is created automatically. This thread pool is a hybrid of the above configurations. It is a cached thread pool that creates threads on demand (as it may be shared by different applications or libraries that use completion handlers that invoke blocking operations).

As with the fixed thread pool configuration, it has N threads that dequeue events and dispatch directly to the user's completion handler. The value of N defaults to the number of hardware threads but may be configured by a system property. In addition to the N threads, there is one additional internal thread that dequeues events and submits tasks to the thread pool to invoke completion handlers. This internal thread ensures that the system doesn't stall when all of the fixed threads are blocked, or otherwise busy, executing completion handlers.
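A channel opened without an explicit group is bound to this default group, as the short sketch below shows (the class name is illustrative; the system property named in the comment is the one documented for configuring the default group's pool size):

```java
import java.nio.channels.AsynchronousServerSocketChannel;

public class DefaultGroupDemo {
    public static void main(String[] args) throws Exception {
        // Opening a channel without specifying a group binds it to the default
        // group. The default group's fixed thread count can be configured on
        // the command line, e.g.:
        //   -Djava.nio.channels.DefaultThreadPool.initialSize=8
        AsynchronousServerSocketChannel listener = AsynchronousServerSocketChannel.open();
        System.out.println("open: " + listener.isOpen());
        listener.close();
    }
}
```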

What happens when an I/O operation completes immediately?

When an I/O operation completes immediately, the API allows the completion handler to be invoked directly by the initiating thread if the initiating thread is itself one of the pooled threads. This creates the possibility of several completion handlers on a thread's stack. The following diagram depicts a thread stack where a read or write method has completed immediately and the completion handler has been invoked directly. The completion handler, in turn, initiates another I/O operation that completes immediately, so its completion handler is invoked directly, and so on.

[Diagram: thread stack]

By default, the implementation allows up to 16 I/O operations to complete directly on the initiating thread before requiring that all completion handlers on the thread stack terminate. This policy helps to avoid stack overflow, and also the starvation that could arise if a thread initiates many I/O operations that complete immediately. The policy, and the maximum number of completion handler frames allowed on a thread stack, can be configured by a system property where required. A future addition to the API may allow an application to specify how I/O operations that complete immediately are handled.
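The pattern that stacks handler frames is a handler that initiates another I/O operation from inside completed(). A minimal loopback sketch (all names are illustrative; whether a given read completes immediately, and therefore whether the handler actually runs on the initiating thread, depends on timing):

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.CountDownLatch;

public class ChainedReadDemo {
    public static void main(String[] args) throws Exception {
        AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0));
        int port = ((InetSocketAddress) server.getLocalAddress()).getPort();

        AsynchronousSocketChannel client = AsynchronousSocketChannel.open();
        client.connect(new InetSocketAddress("127.0.0.1", port)).get();
        AsynchronousSocketChannel peer = server.accept().get();
        peer.write(ByteBuffer.wrap("hello".getBytes(StandardCharsets.US_ASCII))).get();

        ByteBuffer buf = ByteBuffer.allocateDirect(16);
        StringBuilder received = new StringBuilder();
        CountDownLatch done = new CountDownLatch(1);

        // The handler initiates another read from inside completed(). If data is
        // already available, the implementation may invoke this handler directly
        // on the initiating thread, stacking frames up to its depth limit.
        client.read(buf, null, new CompletionHandler<Integer, Void>() {
            @Override
            public void completed(Integer n, Void att) {
                if (n < 0) { done.countDown(); return; }   // EOF
                buf.flip();
                byte[] bytes = new byte[buf.remaining()];
                buf.get(bytes);
                received.append(new String(bytes, StandardCharsets.US_ASCII));
                buf.clear();
                if (received.length() >= 5) { done.countDown(); return; }
                client.read(buf, null, this);              // chain another read
            }
            @Override
            public void failed(Throwable exc, Void att) { done.countDown(); }
        });

        done.await();
        System.out.println("received: " + received);
        peer.close(); client.close(); server.close();
    }
}
```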

Direct Buffers

The asynchronous I/O implementation is optimized for use with direct buffers. As with SocketChannels, all I/O operations are done using direct buffers. If an application initiates an I/O operation with a non-direct buffer, the buffer is transparently substituted with a direct buffer by the implementation.

By default, the maximum memory that may be allocated to direct buffers is equal to the maximum Java heap size (Runtime.maxMemory). This may be configured, where required, using the MaxDirectMemorySize VM option (e.g. -XX:MaxDirectMemorySize=128m).

The MBean browser in jconsole can be used to monitor the resources associated with direct buffers.
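The same statistics that jconsole displays can also be read programmatically through the platform BufferPoolMXBean (available since Java 7); a small sketch, with an illustrative class name:

```java
import java.lang.management.BufferPoolMXBean;
import java.lang.management.ManagementFactory;
import java.nio.ByteBuffer;
import java.util.List;

public class DirectBufferStats {
    public static void main(String[] args) {
        // Allocate a direct buffer so the "direct" pool shows non-zero usage;
        // keep a reference so it is not collected before we read the stats.
        ByteBuffer buffer = ByteBuffer.allocateDirect(1 << 20); // 1 MiB

        List<BufferPoolMXBean> pools =
                ManagementFactory.getPlatformMXBeans(BufferPoolMXBean.class);
        for (BufferPoolMXBean pool : pools) {
            // Typical pool names are "direct" and "mapped".
            System.out.println(pool.getName()
                    + ": count=" + pool.getCount()
                    + ", used=" + pool.getMemoryUsed());
        }
        System.out.println("capacity: " + buffer.capacity());
    }
}
```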