[Bernstein09] 10.7. Legacy TP Monitors



Transactional middleware products began with the development and deployment of dedicated hardware and software systems, designed specifically for use in processing transactions. The first such system, called SABRE, was developed by IBM and American Airlines in the late 1950s and early 1960s as an automated way of reserving seats on airplanes. Later it was adapted for use by other airlines. The operating system layer became an IBM product called ACP (Airline Control Program) with PARS (Programmed Airline Reservation System) as one of the applications. An offshoot named PARS-Financial was used in the finance industry. The product introduced many useful innovations, such as system performance modeling prior to system construction, replicated writes, fast restart (at most 5 seconds), intelligent terminal controllers, client failover, and workload migration. However, in one respect, it was quite bare-bones by today’s standards: ACID transaction semantics was implemented by the application. Many years later, acknowledging its use outside the airline industry, IBM renamed it TPF (Transaction Processing Facility). TPF is still used for airline reservations some 40 years later, and by several financial institutions, for example to process credit card payments.

In the late 1960s, IBM released two TP monitor products with much more functionality, IMS (Information Management System) and CICS (Customer Information Control System). CICS was developed before IMS, by IBM’s field engineering group, initially for one specific customer, but was not released as a product until after IMS. All of these TP monitors were designed for the mainframe environment and typically were used on machines dedicated to a single application.

During the minicomputer era, roughly from 1980 to 2000, a new generation of distributed TP monitors emerged. Operating systems for these machines were designed to work well with many more processes than mainframe systems. So these TP monitors make heavier use of processes, which enables them to scale up an application by moving processes onto more machines. This is in contrast to earlier mainframe TP monitors, where scaling up usually involves buying a larger mainframe machine.

Legacy TP monitors typically include a lot of product-specific components. They are part of the TP monitor because they’re essential for the construction and deployment of TP applications but were not part of the underlying platform at the time the TP monitor product was developed. Examples include specialized resource managers such as database management systems, indexed files, and queues; specialized presentation technology for vendor-specific terminals; program development tools; and system management environments. Today, many of these components are available as general-purpose technology, such as general-purpose system management tools, display devices, and database systems.

Many applications still exist that are based on legacy TP monitors, because the applications work well and the cost of rewriting the application exceeds the expected savings in moving to commodity technologies. This section describes some of these products—ones that you may still encounter, for example in the context of a legacy modernization, interoperability, or SOA project.

One challenge with modernization, interoperability, and SOA projects is finding a good point of entry for an external call into the legacy application. Many older applications are not very modular, due to poor initial design, tight integration of components to meet stringent performance requirements, or many changes made to the application over the years. Sometimes this means that the applications themselves have to be modified to complete the project, and it can be difficult finding programmers who are qualified to work in the legacy computing environment.

The legacy TP monitors described in this section all are popular enough that interfaces to modern TP environments have been built for them, such as Web Services wrappers, Java EE connectors, CORBA interfaces, and message queue adapters. In some cases capabilities such as these have been added directly into the legacy TP monitors themselves. And as we have already seen, both the .NET Framework and Java EE transactional middleware environments include capabilities specifically designed for integration with these (and other) legacy environments.

CICS Transaction Server

CICS is IBM’s most popular legacy TP monitor. It pioneered many of the technologies and approaches found in modern transactional middleware products, including two-phase commit and transactional RPC.

Developed in 1968 to improve the efficiency of mainframe operating system environments, CICS is now a family of products running on the VSE and z/OS operating systems. A version of the CICS product called TX Series runs on the UNIX and Windows operating systems. Although there is some variation of features between the different implementations, the products all support essentially the same “command level” API.

Commands are embedded using the prefix EXEC CICS in any of the supported languages: COBOL, PL/I, Assembler, C/C++, and Java. The commands are translated by a precompiler into CICS function calls to carry out the requested operations. For example, the following are commands to send a form to a terminal, receive a form from a terminal, and link to another CICS program:

EXEC CICS SEND...
EXEC CICS RECEIVE...
EXEC CICS LINK...
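To make the precompiler step concrete, here is a toy sketch in Python of what a command-level translator does: it rewrites an embedded command into an ordinary function call on a runtime library. This is illustrative only; the real CICS translator emits language-specific CALL statements, and the `dfh_` runtime entry points below are invented for this sketch.

```python
import re

def translate(source_line):
    """Toy precompiler pass: rewrite an embedded EXEC CICS command
    (written here in a simplified parenthesized form) into a call on a
    hypothetical runtime library; pass other lines through unchanged."""
    m = re.match(r"EXEC CICS (\w+)\s*\((.*)\)", source_line)
    if m:
        verb, args = m.groups()
        return f"dfh_{verb.lower()}({args})"   # invented runtime entry point
    return source_line

print(translate("EXEC CICS SEND(mapname)"))    # rewritten into a function call
print(translate("MOVE A TO B"))                # non-CICS lines pass through
```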

IBM and various third-party vendors offer toolkits for enabling Web Service access to CICS applications, allowing them to participate in SOA-based applications. These convert existing COBOL data types to XML and generate SOAP messages and WSDL interfaces from the COBOL metadata (often called the Copy Book or COMMAREA). CICS supports HTTP as a transport, which allows SOAP and plain XML messages to be exchanged with CICS transactions.

Other approaches to legacy integration employ an intermediate node, such as a UNIX or Windows system running the TX Series version of CICS. The intermediate node runs Web Services or other formats and protocols, which are converted into legacy formats and protocols. Deploying the integration solution on an intermediate machine can avoid having to modify the mainframe CICS application or install additional integration software on the mainframe.

Most CICS remote communication today uses TCP/IP and HTTP. LU6.2 gateways are still in use but IIOP, RMI, and WebSphere MQ protocols are more typical. IBM also has ported WebSphere Application Server to the mainframe, which can invoke EJB Session Beans hosted in CICS.

System Architecture

CICS offers a process-like abstraction called a region. A region is an address space that can execute multiple threads. CICS implements its own middleware-level threading abstraction (see Section 2.3). A region can own resources, such as terminals, programs, communications, and databases. The failure of an application is scoped to a region; that is, when a failure occurs, it affects only the region. The unit of distribution likewise is a region.

Each CICS resource type is described by a table, whose entries list resources of that type (see Figure 10.42). In early mainframe versions of CICS, it was common practice to have all resources of a TP application be owned by a single region. Early communications mechanisms were limited and had high overhead, so this was the most efficient structure. It amounts to running all tiers of a TP application in a single process. Today, communications capabilities are much improved, so the recommended practice is to partition an application into three regions that correspond roughly to the multitier transactional middleware model: a terminal region (the front-end program), an application region (request controller), and a data region (transaction server).

Figure 10.42. A CICS Region. A region provides multithreading and controls application resources, including devices, transaction programs, data resources, and communications links.


A CICS region can communicate with another region and with an external application using a dynamic program link (DPL), an RPC-style mechanism specific to CICS. Inter-region communication points offer good opportunities for integration with applications based on other technologies, including through an external client.

In a terminal region, the terminal table identifies the terminals attached to the region. When a user logs in, the user is authenticated via a password; later accesses to data resources are controlled via an access control list external to CICS. The region can support geographical entitlement by optionally checking that the user is authorized to operate from the given terminal within a given time period and to fulfill a given role (such as the master terminal operator).

“Transaction” is the CICS term for a request. Each request entered by a user includes a four-character transaction code. Using the region’s transaction table, the request can be sent to a region that can process it. In CICS, this is called transaction routing. In our model, it corresponds to using a request controller to route a request from a front-end program to a transaction server.

Requests that arrive at a region are classified based on request type, user, and terminal ID. Once the request is scheduled, the program table checks whether or not this request type can be processed locally, and whether the user and terminal are authorized to run this request. It then loads the application program if it’s not already loaded. (Multiple users can share the same copy of a transaction program.) Then CICS creates a task, which is the execution of a transaction program for a given user and request, assigns it an execution thread, and automatically starts a transaction using the chained transaction model. Each execution thread is reclaimed for use by another program when the reply message is sent.
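The table-driven routing just described can be sketched in a few lines of Python: the leading four-character transaction code is looked up in a transaction table to find the region that can process the request, and in a program table to find the transaction program to run. All table contents and names below are invented for illustration; real CICS tables carry far more attributes (security, priority, remote ownership, and so on).

```python
# Invented sample tables: transaction code -> owning region / program.
transaction_table = {"DEP1": "ACCT_REGION", "WDR1": "ACCT_REGION", "RPT1": "RPT_REGION"}
program_table = {"DEP1": "DEPOSITP", "WDR1": "WITHDRWP", "RPT1": "REPORTP"}

def route(request):
    """Extract the 4-character transaction code and look up where the
    request should run and which program executes it."""
    code = request[:4]
    return transaction_table[code], program_table[code]

print(route("DEP1 ACCT=00042 AMT=100"))
```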

Front-End Program

Before the widespread adoption of PCs, most CICS systems were accessed by IBM 3270 block-mode terminals, which send and receive a screen of data at a time. This is still a popular mode of access, sometimes with 3270 emulation software running on the PC or other displays that conform to the 3270’s data stream communications protocol. Thus, one way for an external system to communicate with a CICS application is to emulate a 3270 terminal and communicate using the 3270 protocol. A function called the external programming interface (EPI) provides this support. EPI is also used to connect a variety of external clients.

CICS has a built-in forms manager, called Basic Mapping Services (BMS), which maps between a device-oriented view of the data and program-oriented data structures. BMS can be used to interact with 3270 terminals and other types of devices. Typical Web Service enablement and other interoperability tools support COMMAREA direct calls (DPL style), 3270 emulation, and BMS emulation.

TP Communications

CICS offers applications a variety of ways to call remote programs. We have already encountered EPI. Some others are:

  • Distributed Program Link (DPL), which is a programming model similar to a remote procedure call. DPL is synchronous, that is, the application waits for the results (see Figure 10.43).

    Figure 10.43. Communications from CICS Clients and External Interfaces. CICS provides multiple communications options for distributed processing and interoperability with external platforms, including the integration of browsers, PCs, and UNIX Servers.

  • Multiregion Operation (MRO) and Inter-Systems Communication (ISC), which are available on CICS VSE and z/OS, are transport mechanisms that enable communications between regions running on the same mainframe (i.e., transaction routing and DPL can be implemented using MRO or ISC).

  • Distributed Transaction Processing (DTP), which is the interface to a peer-to-peer communications transport. It uses the LU6.2 protocol, which is a session-based protocol that associates a transaction with each session using the chained transaction model. It propagates transaction context across send-message operations and includes a two-phase commit protocol. LU6.2 is part of IBM’s proprietary network architecture called SNA (System Network Architecture).

The COMMAREA is the standard place in main memory to put information to pass via inter-region communications facilities such as DPL. Web Services toolkits for CICS also use the COMMAREA to obtain message definitions. COMMAREA data types typically are converted to XML data types for use in Web Services.
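The kind of mapping such a toolkit performs can be sketched as follows: a fixed-layout COMMAREA record (named fields at fixed offsets and lengths, as in a COBOL copybook) is sliced into fields and rendered as XML. The field layout here is invented for illustration; real toolkits derive it from the copybook metadata and handle packed-decimal and binary types as well.

```python
# Invented copybook-style layout: (field name, offset, length).
LAYOUT = [("cust-id", 0, 6), ("name", 6, 20), ("balance", 26, 9)]

def commarea_to_xml(buf):
    """Slice a fixed-layout character buffer into fields and emit XML."""
    parts = ["<record>"]
    for name, start, length in LAYOUT:
        value = buf[start:start + length].strip()
        parts.append(f"  <{name}>{value}</{name}>")
    parts.append("</record>")
    return "\n".join(parts)

buf = "000042" + "JONES".ljust(20) + "000123.45"   # a sample 35-byte record
print(commarea_to_xml(buf))
```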

The CICS Universal Client product from IBM includes programming libraries for Visual Basic, C/C++, and COBOL, and supports both TCP/IP and SNA-based communication protocols.

Database Access

CICS initially was implemented on mainframe operating systems that did not support efficient multithreading. Thus, multithreading was implemented by CICS. Recall from Section 2.3 that such an implementation must not allow application programs to issue blocking operations, since they would delay all the threads in the process. Therefore, applications issued all of their database operations to CICS, which could thereby switch threads if a database operation would ordinarily cause the process to be blocked.
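The thread-switching behavior described above is essentially cooperative multitasking, and can be sketched with Python generators: each application "thread" is a coroutine that hands control back to the monitor whenever it issues a potentially blocking operation, so one slow database call never stalls the rest. This is a conceptual sketch only, not how CICS is implemented.

```python
import collections

def app(name, results):
    """A toy application thread: do some work, issue a 'DB operation'
    (yield control to the monitor), then resume when it completes."""
    results.append(f"{name}: start")
    yield "db-read"                     # hand control back at the blocking point
    results.append(f"{name}: resume after I/O")

def monitor(task_names):
    """Toy monitor: run each ready task until it yields at a DB call,
    then switch to the next task instead of blocking the whole process."""
    results = []
    ready = collections.deque(app(n, results) for n in task_names)
    while ready:
        task = ready.popleft()
        try:
            next(task)                  # run until the task issues a DB op
            ready.append(task)          # I/O "completes"; task is ready again
        except StopIteration:
            pass                        # task finished
    return results

print(monitor(["T1", "T2"]))            # note the interleaving across tasks
```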

Early versions of CICS did most of their database processing through COBOL indexed files, accessing the VSAM (Virtual Storage Access Method) file store. CICS and VSAM include services for buffer management, block management, indexed access, and optional logging for rollback. CICS was among the first TP systems to offer remote data access, using a facility called function shipping, which allows an application to access a remote VSAM file.

Later, support was added for IMS databases via the DL/I interface and, more recently, for relational databases including IBM’s DB2 family via the SQL interface. Implementations of all continue to be found in production.

IMS

IMS (Information Management System) is another popular TP monitor product from IBM. IMS was designed with Rockwell and Caterpillar for the Apollo space program. IMS’s challenge was to inventory the very large bill-of-materials for the Saturn V moon rocket and Apollo space vehicle. Thus, its design originally centered around its powerful hierarchical database.

IMS was released in 1968 for IBM mainframes. It was among the first products to offer online database and transaction processing at a time when nearly all data processing was done in batch. IMS runs in both online and batch modes, allowing the incremental conversion of an application from batch to online. Like many TP applications, most IMS applications still contain a large portion of batch programming.

IMS consists of both a TP monitor called IMS Transaction Manager (TM) and a hierarchical-style database system called IMS Database Manager (DB). The TP monitor and database systems are independent and can be configured separately, which allows considerable flexibility. For example, IMS DB can be used with the CICS TP monitor, or IMS TM can be used with DB2, IBM’s relational database product. Multiple IMS systems can be configured for distributed processing environments and as standby systems for high availability. In addition, IMS supports multiple optimizations for fast performance.

IMS TM is among the first queued messaging systems dedicated to TP. Like CICS, IMS TM can be accessed from devices, PCs, and UNIX systems outside the mainframe environment. It has specific external access points for XML, Web Services, Java EE, and BPEL. A variety of third-party products provide support for Web Service enablement and interoperability with IMS, such as Orbix, WebSphere MQ, and WebSphere Application Server. IMS DB includes support for XML data mapping, JDBC drivers, and XML Query.

Basic System Architecture

Applications run in a system, which contains the application program itself and the facilities required to support the application. In contrast to CICS, which manages its own address space, an IMS application runs in an operating system process and accesses TP monitor services such as threading, dispatching, and program loading through a call interface to a system library (instead of using an embedded command-style language). An example appears in Figure 10.44. Multiple applications can run in separate processes to take advantage of zSeries symmetric multiprocessing.

Figure 10.44. IMS COBOL Example. The PROCEDURE DIVISION starts with a loop that executes until no more messages are found on the queue. The first time a request is issued for this program, IMS loads it and keeps it loaded until all requests for the program are completed.
PROCEDURE DIVISION.

ENTRY-LINKAGE.
    ENTRY 'DLITCBL' USING I-O-PCB DB-PCB.

MAIN-PROGRAM.
    PERFORM GET-MSG-ROUTINE THRU GET-MSG-ROUTINE-EXIT
        UNTIL I-O-STATUS-CODE EQUAL NO-MORE-MESSAGES.
    GOBACK.

The basic IMS TM model is queued. An end user inputs some data on a device (see Figure 10.45). IMS extracts the data, adds a transaction ID, formats the input into a request message, and enqueues it on the input queue. IMS then loads the program associated with the transaction, if it is not already running. Then IMS dequeues the input message (which starts the transaction), translates the transaction ID into the transaction program name, and routes the message to the application, which executes the transaction program using the input data. When the transaction program completes, the application enqueues a reply message to the output queue associated with the input device or program. There are options to enqueue the reply message to a different device, another application, or a specific user, instead of or in addition to the input device.
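The queued request/reply cycle just described can be sketched in Python: a request tagged with a transaction ID goes on an input queue, the monitor dequeues it (starting the transaction), dispatches the matching program, and enqueues the reply for the originating device. Program names and IDs here are invented; a real IMS also handles message formatting, logging, and recovery around each step.

```python
from collections import deque

# Invented transaction-ID -> transaction-program mapping.
programs = {"DEBIT": lambda data: f"debited {data}"}

input_queue, output_queue = deque(), deque()

def submit(tran_id, data):
    """Device input: format a request message and put it on the input queue."""
    input_queue.append((tran_id, data))

def run_once():
    """Monitor step: dequeue (starting the transaction), run the matching
    transaction program, and enqueue the reply for the device."""
    tran_id, data = input_queue.popleft()
    reply = programs[tran_id](data)
    output_queue.append(reply)

submit("DEBIT", "acct 42 by 10")
run_once()
print(output_queue[0])
```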

Figure 10.45. Basic IMS System Architecture. Request and reply messages move between a device and an application via queues. Various gateways connect IMS to external communications systems and resource managers.


IMS TM also offers an optimization called Fast Path, which essentially allows the application to bypass the queuing system (i.e., request controller) and send simple request messages directly from the device to the transaction program, using a predefined mapping that is kept resident in main memory. Requests identify the fast path transaction programs, which are preloaded and ready to process the requests. The fast path can also use a special main memory database, with advanced concurrency control features, as described in Section 6.5 on hot spot locking.

An interface called the Open Transaction Manager Access (OTMA) allows multiple communications managers to connect to IMS. Using OTMA, IMS receives transaction requests from any source on the network and routes responses back.

Security options include device security (which controls the entry of IMS commands), password security, and access control on transactions, commands, control regions, and application programs.

Front-End Program

IMS TM includes a built-in forms manager, called Message Format Service (MFS), and an optional Screen Definition Facility (SDF) that defines and formats IBM 3270 terminal screens and collects the input data and transaction ID for the request message.

TP Communications

IMS TM is based on a queued TP model, rather than a direct TP model such as RPC. This offers enhanced recovery compared to most TP monitors, at the cost of extra transactions and I/O, as described in Chapter 4.

Applications access the input and output queues using calls to retrieve input messages, to return output messages to the sending device, and to send output messages to other application programs and devices. MFS assists in translating messages between device format (originally the terminal format) and the application program format.

Extensions to IMS allow it to accept a remote call from a PC or workstation, access an IMS database via SQL, use APPC for LU6.2 conversational communications with CICS, access the message queue interface (MQI) to interoperate with WebSphere MQ, and accept calls from a CORBA wrapper, EJB, or Web Service. IMS also supports TCP/IP sockets for LU6.2-style conversations. And IMS supports IBM’s Intersystem Communication (ISC), which allows communication among multiple IMS systems or between an IMS system and a CICS region; and Multiple Systems Coupling (MSC), which allows communication among multiple IMS systems.

Database Access

The native database system that comes with IMS is based on a hierarchical model, which preceded the development of relational database systems. The higher performance of the hierarchical model is one of the reasons IMS-based applications are still in production. Today’s IMS applications can also use DB2, in addition to or in place of the IMS DB database. The database access runs using the application’s thread. A data propagation utility is available that moves data updates from IMS DB to DB2, or vice versa, automatically. Java library support allows IMS DB to invoke stored procedures hosted in DB2. Other tools allow data to be moved between IMS and non-IBM relational databases.

Tuxedo

Tuxedo is a legacy TP monitor from Oracle. Tuxedo runs on a variety of UNIX and Windows platforms. Oracle owns the rights to Tuxedo, as do a few resellers who customize the product for their own platforms (e.g., UNISYS and Bull). AT&T’s Bell Laboratories created Tuxedo in 1984, primarily to service telecommunication applications, which remains its largest market. The Tuxedo design is based on IMS, and originally was intended to replace IMS at the US telephone companies (who are large IMS users).

Tuxedo supports several options for interoperability with external systems, including Java EE, Web Services, and CORBA.

Tuxedo was the basis for many of the X/Open DTP standards, including the DTP model itself, XA, TX, and XATMI. Tuxedo also implements OTS via its CORBA API.

System Architecture

Tuxedo provides two main APIs. One is called the Application Transaction Monitor Interface (ATMI), which is a collection of runtime services that are called directly by a C, C++, or COBOL application. The other is the CORBA C++ API. Tuxedo runtime services provide support for communications, distributed transactions, and system management. In contrast to the full-featured CICS API, ATMI relies heavily on UNIX system libraries and external database system services for filling some TP application requirements.

The ATMI function tpcall() invokes a Tuxedo service. A typical tpcall is shown in the following example:

tpcall("TRANS", (char *)reqfb, 0, (char **)&reqfb, (long *)&reqlen, 0);

Tuxedo services can be developed using C, C++, or COBOL. Native Tuxedo API clients can be developed using C, C++, COBOL, and Java. Tuxedo services can be written using Java when they are hosted on the Oracle WebLogic Server using its domain gateway feature. And CORBA-compliant Tuxedo API clients and servers can be developed using C++.

Tuxedo’s services are implemented using a shared memory area called the bulletin board, which contains configuration information (similar to CICS tables) that supports many TP monitor functions (see Figure 10.46). For example, it contains transaction service names, a mapping of transaction service names to transaction server addresses, parameter-based routing information, and configuration options for transaction services and servers.

Figure 10.46. Tuxedo Client/Server Architecture. Requests are routed to the correct server process using the bulletin board, whether on a local or remote node.


In a distributed environment, one system at a time is designated as having the master bulletin board. The bulletin board at each node is loaded into shared memory from a configuration file when Tuxedo boots. Changes to the master bulletin board are written to the configuration file, which is propagated at boot time if it has changed since the last boot. The master copy of the bulletin board is propagated at the boot of a new machine. Other nodes reload the file to see the updated state. Servers and services can be added, modified, or removed dynamically.

A Tuxedo system consists of client and server processes. Clients typically provide presentation services to users. That is, they interact with devices that issue requests and do not access transactional resource managers. Unlike CICS and IMS, a Tuxedo client is allowed to issue a Start operation, which may optionally be forwarded to the server (request controller or transaction server) to actually start the transaction.

Tuxedo systems are configured in a domain, which defines the scope of computers in the network that participate in a given application. The domain concept essentially represents an administrative boundary around participating client and server processes in a network and represents the scope of shared access to bulletin board metadata. A domain also can be federated with other domains to increase the scalability of large Tuxedo installations.

Although the bulletin board typically is used for the request controller, a Tuxedo server can perform the functions of request controller, transaction server, or both. This flexibility allows an application to be structured into a multitier architecture, but doesn’t require it.

In Tuxedo, a service is the name of a server interface. When a client calls a service, the bulletin board forwards the call to a server that supports the service, similar to how IMS routes a queued message to a transaction program. The server might be on a different node than the client, in which case the bulletin board routes the request via a bridge to the other node. When a service becomes available, the server advertises the service by posting the service name to the bulletin board. Each server process has a main memory queue that is used for incoming messages (see Figure 10.47). A call to a service causes a message to be put into its queue. As in IMS, the server dequeues messages sent by the client and does the requested work, optionally in priority order. When it’s done, the server sends a reply message to a message queue associated with the client, which includes a status that tells whether the call completed successfully or resulted in an error. The client dequeues the message and processes the reply.
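The advertise/lookup/enqueue flow can be sketched as follows: servers post service names to shared metadata, and a client call looks up the service and puts a message on the owning server's queue, then waits for the reply on its own queue. All names are invented for illustration, and the sketch runs the server inline rather than as a separate process as Tuxedo actually does.

```python
from collections import deque

bulletin_board = {}          # service name -> server (toy shared metadata)

class Server:
    def __init__(self, name):
        self.name, self.queue = name, deque()
    def advertise(self, service):
        """Post the service name to the bulletin board."""
        bulletin_board[service] = self
    def handle_one(self):
        """Dequeue one request, do the work, enqueue a status + reply."""
        service, data, reply_q = self.queue.popleft()
        reply_q.append(("OK", f"{service} done: {data}"))

def tp_call(service, data):
    """Toy synchronous call in the style of ATMI tpcall(): look up the
    service, enqueue the request, and dequeue the reply."""
    reply_q = deque()        # stands in for the client's reply queue
    server = bulletin_board[service]
    server.queue.append((service, data, reply_q))
    server.handle_one()      # in reality the server process runs independently
    return reply_q.popleft()

srv = Server("PAYMENTS")
srv.advertise("TRANSFER")
print(tp_call("TRANSFER", "acct 7 -> acct 9"))
```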

Figure 10.47. Tuxedo Request Message Flow. Requests are routed between client and server processes using input and output queues.


The Tuxedo API offers programmers explicit transaction control primitives—for example, tpbegin, tpcommit, and tpabort.

Flags can be set in the client program and in the configuration file to place the execution of transaction programs in automatic, or implicit, transaction mode. In implicit transaction mode, a transaction is started automatically when the transaction program receives control from the front-end program (or client program, in Tuxedo terminology), and is automatically committed if the execution of the server program is successful. If the client program starts a transaction, automatic transaction mode detects the existing transaction and includes the called transaction program in the same transaction. An execution error (that is, a return of bad status) results in an automatic transaction abort. This is similar to the way CICS handles transactions for DPL-invoked programs.
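The implicit-mode behavior described above can be modeled in a short sketch: if the caller already started a transaction, join it; otherwise begin one, commit on success, and abort on error. This is a toy model of the control flow only (a single global transaction slot, no real resource managers), not Tuxedo's implementation.

```python
# Toy global transaction state: whether a transaction is active,
# plus a log of begin/commit/abort actions for inspection.
current_txn = {"active": False, "log": []}

def run_in_auto_mode(program, *args):
    """Run a transaction program in implicit transaction mode: start a
    transaction only if none exists, commit on success, abort on error."""
    started_here = not current_txn["active"]
    if started_here:
        current_txn["active"] = True
        current_txn["log"].append("begin")
    try:
        result = program(*args)
    except Exception:
        current_txn["log"].append("abort")   # bad status -> automatic abort
        current_txn["active"] = False
        raise
    if started_here:
        current_txn["log"].append("commit")  # success -> automatic commit
        current_txn["active"] = False
    return result

run_in_auto_mode(lambda x: x + 1, 1)
print(current_txn["log"])
```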

An explicit programming model option is asynchronous commit processing, where an application can continue without waiting for the second phase to complete in a two-phase commit operation.

Error handling is at the application level. The program examines a global variable to get an error message, and checks this error status after every call, as in IMS programming.

Front-End Program

Some legacy applications still use Tuxedo’s Data Entry System forms package, originally designed for use on character cell terminals. The input on such a form contains the desired transaction type’s service name and a typed buffer that contains the input data. It also includes flags that select various options, such as automatically starting a transaction for the server being called and automatically retrying after an operating system interrupt signal.

Native communication messages are constructed using Tuxedo’s Field Manipulation Language (FML). This creates typed buffers, which are similar to the CICS COMMAREA.

Tuxedo offers several options for external client access, including the /WS package for UNIX and PC clients, web browser and Web Services clients, CORBA clients, and a Java client for use with Oracle’s WebLogic application server. Tuxedo also supports interoperability with JMS-based message queues.

TP Communications

Processes using the ATMI protocol can communicate using a choice of peer-to-peer message passing, remote procedure calls, or an event posting mechanism. An RPC can be synchronous (i.e., the application waits for the results) or asynchronous (i.e., the application asks sometime later for the results). Using peer-to-peer message passing, the programmer can establish a conversational session between the front-end program and the transaction server and exchange messages in an application-defined order, rather than in the strict request-reply style of RPC. A subscription service puts events on the bulletin board, and an event posting mechanism allows a server to raise an event, which sends an unsolicited message to one or more clients (in the case of multiple clients this represents a type of broadcast facility).

Servers developed using the CORBA API can communicate using the RMI/IIOP protocol. Tuxedo servers can interact bidirectionally with an HTTP Web Service through Tuxedo’s SALT (Services Architecture Leveraging Tuxedo) gateway. Tuxedo also includes a variety of mainframe connectivity options, including TCP/IP, SNA, and OSI TP-based protocols with specific support for invoking CICS and IMS transactions.

When a server calls another server, the caller can specify whether the callee runs in the same transaction or outside of the transaction context.

Database Access

Tuxedo has a built-in transaction manager that supports two-phase commit. It can use any XA-compliant resource manager, such as Oracle, Sybase, DB2, or SQL Server.
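The coordination a transaction manager performs over XA-style resource managers can be sketched minimally: in phase one every participant is asked to prepare and votes; only if all vote yes does the coordinator send commit, otherwise it sends rollback. This sketch omits logging, recovery, and the in-doubt handling that make real 2PC robust.

```python
class ResourceManager:
    """Toy XA-style participant with a prepare/commit/rollback interface."""
    def __init__(self, name, can_commit=True):
        self.name, self.can_commit, self.state = name, can_commit, "active"
    def prepare(self):
        # Phase 1: vote yes (and hold locks) or vote no.
        self.state = "prepared" if self.can_commit else "aborting"
        return self.can_commit
    def commit(self):
        self.state = "committed"        # phase 2: all voted yes
    def rollback(self):
        self.state = "rolled back"      # phase 2: someone voted no

def two_phase_commit(rms):
    """Toy coordinator: prepare all, then commit all or roll back all."""
    if all(rm.prepare() for rm in rms):
        for rm in rms:
            rm.commit()
        return "committed"
    for rm in rms:
        rm.rollback()
    return "rolled back"

print(two_phase_commit([ResourceManager("db1"), ResourceManager("db2")]))
```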

ACMS

ACMS (Application Control and Management System) is a legacy TP monitor from HP. ACMS was developed by Digital Equipment Corporation in the early 1980s as part of an effort to gain market share in commercial applications. (Digital’s initial strength was in scientific computing.) ACMS runs on the HP OpenVMS operating system.

ACMS was originally released in 1984 as part of the integrated VAX Information Architecture product set along with Rdb (relational database system), DBMS (CODASYL database system), TDMS (original forms system), DECforms (a newer forms system), CDD (Common Data Dictionary), and Datatrieve (query and report writer for record-oriented files and databases). ACMS pioneered many transactional RPC and abstraction concepts, and remains a popular TP monitor for the HP OpenVMS environment.

System Architecture

ACMS uses a three-process TP monitor model in which each of the three tiers is mapped to a different operating system process, very similar to our multitier architecture: front-end program, request controller, and transaction server (see Figure 10.48). The processes communicate via a proprietary RPC.

Figure 10.48. ACMS Three-Process Model. Remote procedure calls communicate among predefined processes tuned for specific types of application work. The Task Definition Language defines the workflow and controls transactions.


ACMS applications accept a request for the execution of a transaction from a terminal or other display device connected to the process running the front-end program, called the Command Process. It is multithreaded to handle multiple devices concurrently. The front-end program sends a request message to the request controller process, called the Task Server. (A task is a program in a request controller that controls a request.) The request controller is also multithreaded to handle multiple requests concurrently. The request controller calls a procedure running in the transaction server, which ACMS calls the Procedure Server. Since the transaction server is single-threaded, it is typically deployed as a server class consisting of multiple server processes. ACMS monitors the workload on transaction servers to determine whether enough server process instances are active to handle the application workload. If there are too few, it automatically starts another server instance. If a server is idle for too long, ACMS automatically deletes it to conserve system resources.

In contrast to CICS, IMS, and Tuxedo, ACMS has a specialized compiled language, the Task Definition Language (TDL), for specifying request control. It supports features that were required by the ACMS model but not present in traditional imperative languages in the early 1980s when ACMS was designed, such as RPC, multithreading, transaction control, and structured exception handling. TDL is designed to work in conjunction with TDMS and DECforms for menu and forms handling and with any OpenVMS language for transaction server development. It was standardized by X/Open as the Structured Transaction Definition Language (STDL). ACMS was also the basis of the X/Open Transactional RPC specification (TxRPC). Figure 10.49 contains an example of TDL calls to transaction server procedures.

Figure 10.49. ACMS TDL Example for the Transfer Task. The ACMS task definition declares the data to be passed to a procedure using record definitions and can call multiple procedures within the same transaction block.
REPLACE TASK TRANSFER

WORKSPACES ARE CUSTOMER_WKSP,
               ACCOUNTS_WKSP;

TASK ARGUMENTS ARE CUSTOMER_WKSP WITH ACCESS READ,
                   ACCOUNTS_WKSP WITH ACCESS MODIFY;
BLOCK
    ...
    BLOCK WORK WITH TRANSACTION IS

        PROCESSING WORK IS
            CALL WITHDRAW_PROC USING CUSTOMER_WKSP, ACCOUNTS_WKSP;
            CALL DEPOSIT_PROC USING CUSTOMER_WKSP, ACCOUNTS_WKSP;

        EXCHANGE WORK IS ...

        ACTION IS ...

    END BLOCK WORK;

END BLOCK;

END DEFINITION;

When an exception occurs, control is passed to the ACTION portion of the task. Certain exceptions automatically abort the transaction before branching to the exception handler, as in CICS or the automatic transaction mode of Tuxedo. A single-resource transaction can be started in the procedure server.

ACMS offers an open, call-level interface to its RPC, called the Systems Interface (SI) API, for connecting specialized devices such as ATMs, gas pumps, and telecom switches. The SI has also been used to create clients external to ACMS, such as .NET clients, web browsers, and Java EE clients.

TP Communications

All process-to-process communication is via a proprietary RPC protocol, including calling a procedure in another process on the same machine. It is possible to change a local call (i.e., in the same process) to a remote call via a configuration change.

TDL includes an interface definition language called the task group. The TDL compiler uses the task group information to generate proxy and stub programs to be linked with the RPC caller and callee. The callee typically would be a procedure server developed using any of the OpenVMS-supported languages, such as COBOL, C, FORTRAN, Basic, Pascal, and Ada. This allows callers to use standard procedure call syntax, rather than explicitly constructing a specially formatted buffer and then passing it in the call (as in CICS and Tuxedo). Information about the request, such as the security context and display identifier, is automatically placed in hidden arguments and is forwarded transparently to the server, where it becomes part of the server's context.
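The hidden-argument idea, where generated stubs let callers use plain call syntax while the context rides along invisibly, can be illustrated with a small sketch. The marshaling format, function names, and context fields here are all invented for illustration; they are not the ACMS wire protocol.

```python
import json

def make_client_stub(procedure, transport, context):
    """Wrap a remote procedure so the caller uses ordinary call syntax.
    The stub marshals the arguments plus a hidden context into one message."""
    def stub(*args):
        message = json.dumps({
            "proc": procedure,
            "args": list(args),
            "_context": context,   # hidden: the caller never passes this
        })
        return transport(message)
    return stub

def make_server_stub(procedures):
    """Dispatch incoming messages to local procedures. The hidden context
    becomes part of the server's per-call environment."""
    def dispatch(message):
        req = json.loads(message)
        server_context = req["_context"]   # e.g., security checks go here
        result = procedures[req["proc"]](*req["args"])
        return {"result": result, "user": server_context["user"]}
    return dispatch

# Usage: wire a client stub directly to a server dispatcher (in a real
# system the transport would be a network RPC, not a function call).
procs = {"DEPOSIT_PROC": lambda account, amount: amount}
server = make_server_stub(procs)
deposit = make_client_stub("DEPOSIT_PROC", server, {"user": "alice"})
reply = deposit("ACCT-1", 100)
```

The point of the pattern is that the application programmer writes `deposit("ACCT-1", 100)` while the security context flows to the server without appearing in any signature, which is what a stub compiler like TDL's buys over hand-built message buffers.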

ACMS uses OpenVMS cluster technology to support high availability for applications, by automatically redirecting an RPC from a failed node to a surviving node. It uses the OpenVMS transaction manager, DECdtm, for two-phase commit. It also uses the OpenVMS database, Rdb (now owned by Oracle Corp.), for automatic failover in an OpenVMS cluster. That is, the database is available from multiple nodes in the cluster, and the application can fail over automatically from a database connection on one machine to a database connection on another machine, using the OpenVMS lock manager. Using these mechanisms, ACMS is able to achieve very high levels of availability.

ACMS has been extended using a product called TP Ware that includes support for .NET Framework clients, Java clients, web browsers, and Web Services clients running on the Windows operating system. A product called the Web Services Integration Toolkit, running on OpenVMS, exposes ACMS tasks as EJBs and Web Services. ACMS server procedures can use HP's APPC/LU6.2 gateway for interoperability with CICS-based applications. TP Ware essentially replaces the command process in the three-tier architecture with web browser, .NET, and Java clients, providing libraries and an API to directly invoke a task in the task server, bypassing the Command Process.

Database Access

Transaction server programs directly access any database or resource manager. Certain specialized databases are directly accessible from TDL. ACMS includes a queue manager for durable request queue operations.

If a transaction is bracketed within a TDL program (in a request controller), then ACMS controls the commitment activity using DECdtm. If it is bracketed within the transaction server, then ACMS is uninvolved in the commitment process. This is useful for database systems that are not integrated with DECdtm, or that offer specialized options that can only be set in the transaction bracket statements.
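The two bracketing placements can be contrasted with a small sketch: when the request controller brackets the transaction, the monitor's transaction manager (DECdtm in ACMS) drives two-phase commit across every resource manager the called servers touched; when a single server brackets it, commitment is purely local and the monitor never sees it. All interfaces below are invented stand-ins, not ACMS or DECdtm APIs.

```python
class ResourceManager:
    """Stand-in for a database or other resource manager."""
    def __init__(self, name):
        self.name = name
        self.state = "active"
    def prepare(self):
        return True                      # phase 1: vote yes
    def commit(self):
        self.state = "committed"
    def abort(self):
        self.state = "aborted"

def controller_bracketed(server_calls):
    """Request-controller bracketing: the monitor's transaction manager
    coordinates all resource managers enlisted by the servers it called."""
    touched = [call() for call in server_calls]   # each call enlists one RM
    if all(rm.prepare() for rm in touched):       # phase 1: prepare all
        for rm in touched:
            rm.commit()                           # phase 2: commit all
        return "committed"
    for rm in touched:
        rm.abort()
    return "aborted"

def server_bracketed(server_call):
    """Server bracketing: a single-resource transaction committed locally;
    the TP monitor plays no part in commitment."""
    rm = server_call()
    rm.commit()
    return "committed"
```

The controller-bracketed form is what makes the TDL `BLOCK WORK WITH TRANSACTION` example above atomic across the withdraw and deposit calls; the server-bracketed form is the escape hatch for resource managers outside the transaction manager's reach.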

Pathway TS/MP

Pathway with NonStop Transaction Services (TS/MP) is another legacy TP monitor from HP. It was developed originally by Tandem Computers and released in the mid-1980s as a TP development platform for Tandem's Guardian operating system running on their fault-tolerant platform.

Tandem later teased apart its operating system into a kernel portion, the NonStop Kernel (NSK), with two layers on top: one that supports the Guardian API, and one that supports a POSIX (UNIX) API, called Open System Services (OSS). OSS supports a native port of Tuxedo and a nonnative port of a Java EE application server (Oracle's WebLogic). Pathway, and Tandem in general, pioneered high availability, fault tolerance, and data replication technologies.

Pathway is based on a client/server process structure and a transaction abstraction. NonStop TS/MP provides server process management (e.g., load balancing and automatic server restart). Transaction management is implemented using infrastructure called the NonStop Transaction Management Facility (TMF). This TP infrastructure supports all application environments, including Pathway, NonStop Tuxedo, NonStop CORBA, NonStop JSP, NonStop SOAP, and NonStop Web Server. TS/MP recently has been completely rearchitected using a new component called Application Cluster Services that extends server management and load balancing capabilities to the new generation of HP Integrity NonStop processors.

System Architecture

Pathway uses a two-process model to implement its client/server architecture, which is called requester/server (see Figure 10.50). The client is a multithreaded Terminal Control Program (TCP), which handles multiple simultaneous interactions with end users. It supports both front-end program and request controller functions. The TCP interpretively executes programs written in Tandem's COBOL dialect, SCREEN COBOL, which includes features for terminal handling and communication with single-threaded transaction servers. An example of SCREEN COBOL is in Figure 10.51. Enhancements to the NonStop environment have allowed the development of multithreaded transaction servers. Similarly to ACMS, transaction servers execute compiled object code written in a standard language with embedded SQL and run in server classes. Supported languages include C/C++, COBOL, Java, and TAL (Transaction Application Language, which is proprietary to HP NonStop).

Figure 10.50. Pathway Monitor Two-Process Model. The Terminal Control Program interprets SCREEN COBOL programs to interact with the display and format requests, and to call servers via RPC. The servers access the Tandem resource managers. TCPs are implemented using process pairs for fault tolerance.


Figure 10.51. SCREEN COBOL Example. The program accepts input from the display, begins a transaction, and sends messages to two servers, one locally for the debit operation and the other to a remote node for the credit operation.
PROCEDURE DIVISION.
000-BEGIN SECTION.
    ACCEPT INPUT-MSG.
    BEGIN-TRANSACTION.
    MOVE ACCOUNT-ID OF INPUT-MSG TO ACCOUNT-ID OF DBCR-MSG.
    MOVE AMOUNT OF INPUT-MSG TO AMOUNT OF DBCR-MSG.
    SEND MESSAGE DBCR-MSG TO /LOCAL
        REPLY CODE STATUS.
    MOVE BALANCE OF DBCR-MSG TO BALANCE1 OF CONFIRM-MSG.
    SEND MESSAGE DBCR-MSG TO /REMOTE
        REPLY CODE STATUS.
    END-TRANSACTION.

The TCP interprets a SCREEN COBOL application program to display menus, paint and read a screen, validate the input data, and format a request message with the name of the target server class. The application program then starts a transaction and executes a SEND command to issue an RPC to a transaction server in the server class named by the request.

The RPC mechanism establishes a new link to a server in the requested server class if it doesn't already have one or if all existing links are busy processing other requests. The server accepts the message and does the work of the request, accessing a database if appropriate. When the server program completes, it sends a reply message to the TCP. The TCP's application program can invoke many such RPCs before forwarding the reply to the terminal and committing the transaction. Finally, the reply message is displayed.

The Guardian operating system implements software fault tolerance through process pairs, a mechanism in which each primary operating system process has a second, shadow process as a backup. A configuration option tells Pathway to run each TCP as a process pair. A server monitoring feature called Pathmon, which is also implemented as a process pair, monitors Pathway servers and restarts them in the event of a process or processor failure. The primary and backup processes in a process pair configuration run on different processors so that at least one of them will survive any processor failure.

At the beginning of each transaction, Pathway checkpoints the display context (essentially, the request), which means that it copies this state from the primary process to its backup process. It checkpoints again just before commit (essentially, the reply). If the primary fails during the transaction execution, the transaction aborts and the backup can re-execute the transaction using the checkpointed display context, without asking the user to re-enter the data. If the transaction executes without any failures and commits, then the precommit checkpoint replaces the start-of-transaction checkpoint, and the reply can be sent to the display. The checkpoints play a similar role to queue elements in queued TP. The NonStop process pair and checkpoint/restart capability is unique in the TP industry.
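The two checkpoints, the request at transaction start and the reply just before commit, can be modeled in a few lines. This is only a minimal sketch of the idea, not Guardian's actual protocol; the class and method names are invented.

```python
class ProcessPair:
    """Toy model of a primary process checkpointing to its backup."""

    def __init__(self, handler):
        self.handler = handler     # function: request -> reply
        self.backup_state = {}     # state mirrored into the backup process

    def run(self, request, fail_before_commit=False):
        # Checkpoint 1: copy the request (display context) to the backup.
        self.backup_state["request"] = request
        reply = self.handler(request)
        if fail_before_commit:
            raise RuntimeError("primary failed")   # simulate a crash
        # Checkpoint 2: just before commit, checkpoint the reply.
        self.backup_state["reply"] = reply
        return reply               # commit, then send the reply to the display

    def takeover(self):
        """Backup takes over after a primary failure. It re-executes from the
        checkpointed request, without asking the user to re-enter input."""
        if "reply" in self.backup_state:
            return self.backup_state["reply"]      # already precommitted
        return self.handler(self.backup_state["request"])
```

As the text notes, the checkpointed request and reply play much the same role as the request and reply queue elements in queued TP: durable-enough copies that let the work survive a failure of the process handling it.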

Servers are typically stateless, which allows successive calls to a server class within the same transaction to be handled by different servers. Servers are automatically restarted in the event of process or processor failure.

Transactions are managed by the TMF. Updates to data are logged to an audit file, from which TMF manages various types of recovery. There is one log per node. TMF provides a system logging service both for itself (as a transaction manager) and for the NonStop resource managers (NonStop SQL and Enscribe). All updates by a transaction are written as a single log write, no matter how many resource managers are involved, thereby minimizing the number of I/Os per transaction to improve performance and scalability.
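The single-log-write optimization can be sketched as buffering every resource manager's update records in memory and flushing them to the audit file with one physical I/O at commit. The interfaces and record formats below are invented for illustration; TMF's actual structures differ.

```python
class TransactionLog:
    """Toy model of a shared per-node log: one physical write per commit,
    no matter how many resource managers contributed update records."""

    def __init__(self):
        self.disk_writes = 0    # count of physical I/Os to the audit file
        self.audit_file = []    # the durable log (simulated)
        self.pending = {}       # txn id -> buffered update records

    def record_update(self, txn, resource_manager, update):
        """A resource manager hands its update record to the shared log;
        nothing is written to disk yet."""
        self.pending.setdefault(txn, []).append((resource_manager, update))

    def commit(self, txn):
        """Flush all of the transaction's records, plus the commit record,
        in a single log write."""
        records = self.pending.pop(txn, []) + [("TMF", "commit")]
        self.audit_file.append(records)
        self.disk_writes += 1
```

The design choice here is the same one the text credits to TMF: because both the transaction manager and the resource managers share one log per node, the commit-time I/O cost stays constant as more resource managers join the transaction.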

NonStop resource managers provide fault tolerance through disk mirroring and hot backup, and provide upward scalability through data partitioning and parallel processing. System server processes typically run as process pairs to ensure high availability.

Front-End Program

Pathway was introduced in the days of low-function terminals. So its front-end program, the TCP, supports terminal devices via a multithreaded process, where each thread maintains a context for a terminal and initiates a request on behalf of the user. Later on, the TCP interface was opened up for access from PCs, workstations, and other devices, such as ATMs, gas pumps, and bar code readers.

External client support has been added for web browsers, Web Services, .NET, CORBA, JMS, and Tuxedo using a set of special gateway processes that replace the TCP for modern display devices and interoperability solutions. A Web Services toolkit is available to generate a WSDL interface from a Pathway interface so that a standard Web Services client can access a Pathway server. Similarly, the NonStop JSP product, together with the NonStop Web Server product, supports direct access to Pathway servers from standard HTTP clients.

TP Communications

A NonStop system (or node) is a loosely coupled cluster of processors, connected by a high-speed bus called ServerNet. Processors do not share memory, but the architecture is supported by a common operating system environment that provides high performance, availability, and scalability.

The NonStop operating system uses a transactional interprocess communications mechanism, based on the NonStop messaging system, between processes both on the same node and on remote nodes. The communication mechanism is accessed using the PathSend API.

Database Access

The NonStop environment includes an SQL-compliant resource manager called NonStop SQL and a transactional file system called Enscribe. Both resource managers support the parallel processing and distributed processing features of the NonStop platform. When it was released in the mid-1980s, NonStop SQL was the first distributed, parallel relational database system product.

Mirrored disks are supported for local backup, and the Remote Database Facility (RDF) supports a remote hot backup. RDF uses the process pair architecture to forward log records from the primary database to the remote replica, where another process pair applies the log records to the database replica.

Multiple processors can execute separate SQL requests simultaneously or divide a single large request for parallel processing on multiple processors. The resource managers support the standard locking and logging approaches described in Chapters 6 and 7, including record locking, relaxed isolation levels for improved read performance, and logs for undo-redo recovery. Online reconfiguration is supported for such things as moving a partition or splitting an index.