U.S. patent application number 09/942215 was filed with the patent office on 2001-08-30, and published on 2002-08-08, for transaction processing system having service level control capabilities.
Invention is credited to Sagawa, Nobutoshi.
Application Number: 09/942215
Publication Number: 20020107743
Document ID: /
Family ID: 18892769
Filed: 2001-08-30
Published: 2002-08-08

United States Patent Application 20020107743
Kind Code: A1
Sagawa, Nobutoshi
August 8, 2002
Transaction processing system having service level control
capabilities
Abstract
There is provided a transaction processing system for providing
plural services according to service level contracts, the system
comprising: an SLA database for storing contract conditions defined
for each of the services provided; request queues for storing
processing requests sent from clients for the services provided
while putting the respective services into a particular order;
a queuing condition detection module for obtaining waiting conditions
of the processing requests stored in the request queues; and a
scheduler for deciding priorities to the processing requests input
from the client to the transaction processing system by referring
to the contract conditions and the waiting conditions of the
processing requests.
Inventors: Sagawa, Nobutoshi; (Koganei, JP)
Correspondence Address:
    ANTONELLI TERRY STOUT AND KRAUS
    SUITE 1800
    1300 NORTH SEVENTEENTH STREET
    ARLINGTON, VA 22209
Family ID: 18892769
Appl. No.: 09/942215
Filed: August 30, 2001
Current U.S. Class: 705/17
Current CPC Class: G06Q 20/204 20130101; G06Q 10/10 20130101; G06Q 10/109 20130101
Class at Publication: 705/17
International Class: G06F 017/60

Foreign Application Data

Date: Feb 5, 2001
Code: JP
Application Number: 2001-028231
Claims
We claim:
1. A transaction processing system capable of providing one or more
services and connecting one or more clients to each service the
system provides, comprising: means for storing priority conditions
defined according to the services provided; and means of execution
prioritization for deciding execution priorities to processing
requests input from the clients to said transaction processing
system by referring to the priority conditions held or stored in
said priority condition storing means, such that processing is
carried out according to the decided priorities.
2. The system according to claim 1, wherein said means of execution
prioritization gives higher priorities to processing requests for
services the priority conditions of which are defined high in
priority.
3. A transaction processing system capable of providing one or more
services and connecting one or more clients to each service the
system provides, comprising: means for storing priority conditions
defined according to the services provided; queuing means for
storing processing requests sent from the clients for the services
provided while putting the respective services into a particular
order; means for obtaining waiting conditions of the stored process
requests from said queuing means; and means of execution
prioritization for deciding execution priorities to the processing
requests input from the clients to said transaction processing
system by referring to the priority conditions held or stored in
said priority condition storing means and waiting conditions
obtained, such that processing is carried out according to the
decided priorities.
4. The system according to claim 3, wherein said means for
obtaining waiting conditions obtains: the number of processing
requests that have been kept waiting in said queuing means; and the
time of arrival of each processing request that has been kept
waiting in said queuing means.
5. The system according to claim 4, wherein said means of execution
prioritization decides the execution priority by comparing
allowable waiting time defined for each provided service with the
time of arrival of the processing request concerned obtained by
said means for obtaining waiting conditions.
6. A transaction processing system capable of providing one or more
services and connecting one or more clients to each service the
system provides, comprising: means for storing an identifier or
identifiers of one or more execution modules constituting each
service; storage means for storing the execution module or modules;
and means for managing an update of each execution module on the
basis of the identifier, wherein when the execution module is
updated by said update managing means, the updated execution module
is placed to the storage means prior to starting the transaction
corresponding to the service.
7. The system according to claim 6, wherein said update managing
means exclusively performs an update of one or more execution
modules for each service and detection of the update of the
execution modules.
8. A transaction processing system capable of providing one or more
services and connecting one or more clients to each service the
system provides, comprising: queuing means for storing processing
requests sent from the clients for the services provided while
putting the respective services into a particular order; means for
obtaining waiting conditions of the process requests stored in said
queuing means; means for detecting transaction throughput to each
service; and means for allocating transaction processing processes
to the service, wherein said process allocating means decides the
allocation of processes to the service by referring to the process
request waiting conditions obtained and the transaction throughput
detected.
9. The system according to claim 8, wherein said process allocating
means increases the number of processes to be allocated as
processing requests stored in said queuing means increases, and
reduces the number of processes to be allocated as processing
requests stored in said queuing means decreases.
10. The system according to claim 8, wherein said process
allocating means allocates processes according to the priority to
be given to the service.
11. A program having a computer execute transaction processing
capable of providing one or more services and connecting one or
more clients to each of the services provided, comprising: means
for storing priority conditions defined according to the services
in a priority condition database; queuing means for storing, in a
queue or queues, processing requests sent from the clients for the
services provided while putting the respective services into a
particular order; means for obtaining waiting conditions of the
stored process requests stored in the queue or queues; means of
execution prioritization for deciding execution priorities to the
processing requests input from the clients to said transaction
processing by referring to the priority conditions and the waiting
conditions; and means for letting the computer execute transaction
processing according to the decided priorities.
12. A program having a computer execute transaction processing
capable of providing one or more services and connecting one or
more clients to each of the services provided, each service
constituted of one or more execution modules, comprising: means for
judging update conditions of an execution module or modules on the
basis of an identifier or identifiers of one or more execution
modules constituting the service; and means for placing updated
execution module or modules if any to storage means prior to
starting the transaction or transactions corresponding to the
service.
Description
BACKGROUND OF THE INVENTION
[0001] 1. Field of the Invention
[0002] The present invention relates to a transaction processing
system, and in particular to implementation of transaction
processing in response to requests from plural customers.
[0003] 2. Description of the Related Art
[0004] A transaction system is a system for efficiently executing a
large number of processing requests while assuring consistency, and
forms the backbone of corporate information systems such as those for
financial trading and order placement/receipt. In general, a
client/server system is constructed so that a client (terminal)
issues a request and a server executes the main body of transaction
processing, accessing a database as required. A processing program
executing an actual transaction on the server is called a service.
[0005] The service providing side in the transaction system is
called a service provider. For example, in the retail banking
business, ATMs or tellers are clients, and the basic system
including a customer account database is the server. In this case,
the bank is the service provider, which provides services such as
withdrawal and deposit transactions.
[0006] Transaction processing middleware used on the server side is
a transaction monitor. The transaction monitor mainly takes the
following two parts.
[0007] (1) The transaction monitor receives processing requests
sent from clients and queues them, taking into account request
priorities and crowding levels on the server, to forward control of
the respective requests to appropriate server programs (services)
one by one, thus making effective use of server resources.
[0008] (2) The transaction monitor detects errors or faults caused
during execution of processing. If the processing has completed
successfully, it carries out a result-writing (commit) operation;
if not, it carries out a cancel (rollback) or re-run operation.
Thus the transaction monitor assures the consistency of the
transaction processing.
[0009] FIG. 18 shows a typical configuration of the transaction
monitor.
[0010] As shown, a transaction monitor 209 is located on a server
215, while client programs 201, 202 issuing processing requests are
located on client terminals 221, 222, respectively.
[0011] In general, the server 215 is a UNIX server, a mainframe
computer or the like, while the client terminal is a personal
computer, an ATM terminal or the like.
[0012] The transaction monitor 209 includes request queues 210 and
211, a scheduler 204 and a transaction execution module 207.
[0013] The processing request for a service is typically
transferred from the client 221 or 222 to the server 215 in the
form of a message (electronic text). The transaction monitor
therefore has a communication function module of its own, which it
controls to receive processing messages.
[0014] The message received is stored in the transaction monitor
209 as a processing request. Since two or more requests are usually
kept waiting in the transaction monitor 209, the transaction
monitor 209 uses the queues 210, 211 as a First-In First-Out data
structure to store the requests in the order of input. The requests
stored are extracted from the queues 210, 211 in the order of
storage as soon as one of resources (CPU, memory etc.) in the
transaction execution module 207 becomes available, and processed
by corresponding service programs 206.
[0015] (Scheduling and Load Balancing)
[0016] Scheduling is to extract a request from a queue and move the
request to the execution of service program processing for the
request. Efficient scheduling is necessary to increase the
efficiency of the transaction processing system.
[0017] In particular, if there exist plural resources (processor,
server etc.) that provide services, processing efficiency depends a
lot on how to allocate the requests to the plural resources.
Allocating requests to the plural resources to increase the
efficiency of transaction processing is called load balancing.
Hereinafter, the entire process of allocating requests to resources,
including both the scheduling mentioned above and load balancing,
may be referred to as "scheduling."
[0018] As one approach to scheduling, a method of balancing load by
increasing or decreasing the number of processes for providing a
service is known. An outline of the method will be described with
reference to FIG. 19.
[0019] In FIG. 19, requests 301 to 309 are stored in a request
queue 300. These requests are supposed to be processed by processes
310 to 313 one by one. The term "process" refers to a unit of
program execution on a computer: a combination of one virtual
address space, a program loaded into that space, data, and a CPU
register set indicating the execution state of the program.
[0020] In the example of FIG. 19, the same transaction processing
program (service program) is loaded in all the processes. If free
capacity is available on the computer's CPU, the number of services
provided concurrently is increased by increasing the number of
processes, so that the utilization factor of the CPU can be
improved.
[0021] In other words, increasing or decreasing the number of
processes allocated to transactions makes it possible to control
processing throughput for the transactions (the number of requests
processed per unit time).
[0022] FIG. 19A shows a case where a very small number of requests
are stored in the queue 300. In this case, the transaction monitor
allocates a small number of processes (310, 311) to the service
concerned according to the number of requests.
[0023] FIG. 19B shows a case where a large number of requests
arrive and hence the number of requests queued in the queue 300
increases. In this case, the transaction monitor monitors
conditions in the queue to increase the number of processes to be
allocated to the service (310 to 313).
[0024] FIG. 19C shows a case where incoming messages are reduced
and the length of the queue becomes short. In this case, the
transaction monitor deallocates the idling process 313 from the
service and allocates it to another service or task. By associating
the length of the queue with the number of processes to be
allocated, it becomes possible to improve transaction efficiency
within the range of available CPU resources.
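The association between queue length and process count in FIGS. 19A to 19C can be sketched as below. The thresholds and names are illustrative assumptions, not values from the patent.

```python
def decide_process_count(queue_length, min_procs=1, max_procs=4,
                         requests_per_proc=3):
    """Allocate roughly one process per `requests_per_proc` waiting
    requests, bounded by the CPU resources available (max_procs)."""
    wanted = -(-queue_length // requests_per_proc)   # ceiling division
    return max(min_procs, min(max_procs, wanted))

# Few requests -> few processes (FIG. 19A); many requests -> more
# processes (FIG. 19B); when the queue shrinks, idle processes are
# released for other services (FIG. 19C).
print(decide_process_count(1))    # small queue, minimal allocation
print(decide_process_count(9))    # growing queue, more processes
print(decide_process_count(100))  # capped by available CPU resources
```

A real monitor would re-evaluate this periodically as it observes queue conditions.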
[0025] When there are plural servers to be controlled by the
transaction monitor, a system such as that shown in FIG. 20 is used
to balance load among the servers.
[0026] Suppose that there are three servers (420 to 422), and that
a queue 400 of one of the servers (server 420) becomes longer than
the other queues 401, 402 for reasons of server's processing
capacity, crowding level, or the like. In this case, a processing
program 431 on the client side detects this state and preferentially
sends messages to the servers 421, 422 with shorter queues. Thus the
queues can be balanced in length among the plural servers, improving
the total throughput.
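The client-side routing decision of FIG. 20 amounts to picking the server with the shortest queue. A hedged sketch, assuming the client can observe queue lengths (in practice these would be reported over the network):

```python
def pick_server(queue_lengths):
    """queue_lengths: mapping of server name -> number of waiting
    requests. Returns the server whose queue is currently shortest."""
    return min(queue_lengths, key=queue_lengths.get)

# Server 420's queue has grown long, so new messages are routed to
# the less loaded servers 421 and 422.
lengths = {"server420": 7, "server421": 2, "server422": 3}
assert pick_server(lengths) == "server421"
```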
[0027] (Message Broker)
[0028] Example applications of the transaction monitor for a
further advanced multi-transaction processing system include a
message broker.
[0029] A normal transaction processing system has a one-to-one
correspondence between a message and a service, but a message
broker processes one message by passing it among plural services,
invoking them in turn. The message broker stores in the transaction
monitor a service flow (business flow), which designates which
services are invoked for the message and in what sequence. The
services to be invoked may be located on the same server as the
transaction monitor or on another independent stand-alone server.
[0030] FIG. 21 shows a configuration of the message broker.
[0031] Client programs 501, 502 from which processing requests are
issued are located on client terminals, respectively. A transaction
monitor 509 is located on a transaction processing server.
[0032] Service programs A530 and B531 for providing business
services are loaded on different servers (or the same server). The
terminals and the servers are connected with each other through
message communications lines. The transaction monitor 509 includes
request queues 510, 511 and a scheduler 504 for deciding the
sequence of request processing.
[0033] Compared to the normal transaction processing system (FIG.
18), the message broker adds an extension to the transaction
execution module (207 in FIG. 18) to constitute a service flow
execution routine 507.
[0034] The service flow execution routine 507, rather than just
initiating and executing a service program according to a message,
manages the execution of a service flow defined by the service
provider.
[0035] Since the message broker allows the execution of a service
flow on the transaction monitor, it can combine plural service
programs to construct a more complicated service structure.
[0036] (Node Replacement During Operation)
[0037] In the message broker the service flow may often be altered
or changed due to an update or addition of business service. It is
undesirable to stop the entire system each time the service flow is
altered or changed. For this reason, a mechanism for changing only
the service flow without stopping system operation is strongly
required.
[0038] One method is to divide the processes executing the service
flow into two groups (active group and standby group). In this
case, the active group executes the unchanged flow while re-loading
a new service flow to the standby group. Upon completion of
loading, message routing is switched from the active group to the
standby group in the continuation of the system's operation.
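The active/standby scheme above can be sketched as follows. This is an illustrative model with assumed names; in practice "groups" are sets of running processes and "switching" redirects message routing between them.

```python
class FlowRouter:
    """Two process groups: the active group serves messages while a
    new flow is loaded into the standby group, then routing flips."""
    def __init__(self, active_flow):
        self.groups = {"active": active_flow, "standby": None}

    def reload_standby(self, new_flow):
        # Load the changed flow while the active group keeps serving.
        self.groups["standby"] = new_flow

    def switch(self):
        # Redirect message routing; the system never stops running.
        self.groups["active"], self.groups["standby"] = (
            self.groups["standby"], self.groups["active"])

router = FlowRouter("flow_v1")
router.reload_standby("flow_v2")
router.switch()
assert router.groups["active"] == "flow_v2"
```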
[0039] Another method is to provide routing-enable and
routing-disable modes for each service node. In this case, a node
to be replaced is set to routing-disable mode, thereby prohibiting
input of any message to the node while a new service flow is
re-loaded to the node.
[0040] The above-mentioned transaction or message broker processing
systems are known from the following publications: Japanese Patent
Laid-Open Application No. 09-062624 (JP-A-09-062624) (Processing
System for On-line Transaction); Japanese Patent Laid-Open
Application No. 06-243077 (JP-A-06-243077) (Distributed Transaction
Processing System); Japanese Patent Laid-Open Application No.
08-063432 (JP-A-08-063432) (Transaction Batch Processing System in
Consideration of Priority); Japanese Patent Laid-Open Application
No. 06-052121 (JP-A-06-052121) (Batch processing-Real Time
Processing Sorting Type Transaction Processing System); Japanese
Patent Laid-Open Application No. 07-073143 (JP-A-07-073143) (Time
Band-Based Priority Control Transaction Processing System); and
Japanese Patent Laid-Open Application No. 10-040117
(JP-A-10-040117) (Task Control type On-line Transaction Processing
System for Maintaining High Response).
SUMMARY OF THE INVENTION
[0041] New business activities, such as data centers that perform
contract outsourcing of the systems of plural service providers (or
customers) and centralized control of computer resources to improve
total processing efficiency, are growing steadily.
[0042] Such a data center is operated under service level
agreements (SLAs) with service providers, billing the service
providers according to the computer resources used (the number of
transactions, associated CPU operating time, data amount, etc.) and
service guarantee conditions. To reduce billing, it is necessary to
execute more transactions with fewer computer resources (less
investment).
[0043] In contrast, the above-mentioned conventional transaction
processing systems using a transaction monitor or message broker
are constructed on the assumption that a single service provider
provides services to its own clients alone. Therefore, these
conventional systems do not allow one transaction processing system
to be used in common among plural service providers, nor
coordination of transaction resources (computer resources) and
throughput among the plural service providers.
[0044] In other words, upon receiving transaction processing
requests from plural clients, the conventional systems cannot make
effective use of computer resources, which makes it difficult to
secure a sufficient amount of throughput for each client.
[0045] Further, the above-mentioned conventional message broker or
transaction monitor needs to be provided with an auxiliary process
group or routing closing means for updating the service flow due to
an update or addition of business services. In other words, the
conventional message broker or transaction monitor does not allow
for effective use of computer resources among plural clients, which
makes flexible operation difficult.
[0046] It is therefore an object of the present invention to
realize a transaction processing system suitable for providing
business services to plural service providers by enabling
transaction priority control and allocation control of computer
resources in consideration of the above-mentioned SLA.
[0047] A representative mode to be disclosed in this specification
is a transaction processing system comprising: means for holding or
storing priority conditions defined according to services the
transaction processing system provides; queuing means for storing
processing requests sent from clients for the services while
putting the respective services into a particular order; means for
obtaining waiting conditions of the stored process requests from
the queuing means; and means of execution prioritization for
deciding execution priorities to the processing requests input from
the clients to the transaction processing system by referring to
the priority conditions and the waiting conditions of the
processing requests.
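The prioritization step of the representative mode can be sketched as below: compare each request's waiting time against the allowable waiting time held in the priority-condition (SLA) store. The field names and the scoring rule are illustrative assumptions, not the patent's exact algorithm.

```python
import time

def execution_priority(request, sla, now=None):
    """Higher score = schedule sooner. `sla` supplies the contract
    priority and the allowable waiting time for the request's service."""
    now = now if now is not None else time.time()
    waited = now - request["arrival_time"]
    slack = sla["allowable_wait"] - waited      # time left before breach
    # Requests close to breaching their SLA jump ahead of base priority.
    urgency = 1000.0 if slack <= 0 else 1.0 / slack
    return sla["priority"] + urgency

sla_a = {"priority": 10, "allowable_wait": 10.0}
aging = {"service": "A", "arrival_time": time.time() - 9.5}
fresh = {"service": "A", "arrival_time": time.time()}
# The request that has waited almost its full allowance outranks a
# freshly arrived request for the same service.
assert execution_priority(aging, sla_a) > execution_priority(fresh, sla_a)
```

The scheduler would recompute such scores over all queued requests each time a process becomes free.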
[0048] It is preferable that the queuing means is provided with
plural queues, each of which stores processing requests from a
customer or user to which the corresponding service is provided. It
is also preferable that the means for storing priority conditions
contains priority conditions defined according to the type of
processing (service to be executed) and the customer or user.
[0049] Specifically, the transaction processing system further
comprises means for detecting throughput to a transaction to
control the allocation of computer resources to each service, and
means for allocating transaction processing processes to the
service, wherein the means for allocating processes decides the
allocation of processes to the service by referring to the process
request waiting conditions obtained and the transaction throughput
detected.
[0050] More specifically, the transaction processing system further
comprises means for storing an identifier or identifiers of one or
more execution modules constituting each service, and means for
managing an update of each execution module on the basis of the
identifier, whereby when the update managing means executes the
update of the execution module, the updated execution module is
placed (loaded) to storage means prior to starting the transaction
corresponding to the service.
[0051] As discussed above and according to the present invention,
the transaction processing system or message broker carries out
priority control to each service in consideration of priority
conditions defined according to the services the transaction
processing system provides, and processing request waiting
conditions obtained from the queuing means for storing processing
requests sent from clients for the services while putting the
respective services into a particular order.
[0052] The above configuration makes possible transaction
scheduling that meets the contract conditions for each service the
transaction processing system provides for each customer, and hence
real-time processing of more on-line transactions with fewer
computer resources while maintaining the throughput guaranteed
under contract with the customer. Thus the reliability and
performance of a data center that integrally processes business
transactions for plural customers can be improved.
[0053] According to the present invention, the transaction
processing system further comprises means for detecting throughput
to a transaction corresponding to each service, and means for
allocating transaction processing processes to the service,
wherein the means for allocating the processes decides the
allocation of processes to the service by referring to the process
request waiting conditions obtained and the transaction throughput
detected.
[0054] The above-mentioned configuration makes possible the
allocation of processes so as to meet the contract conditions for
each service the transaction processing system provides for each
customer, and hence real-time processing while maintaining the
throughput guaranteed under contract with the customer. Thus the
reliability and performance of a data center that integrally
processes business transactions for plural customers can be
improved.
[0055] According to the present invention, the transaction
processing system further comprises means for storing an identifier
or identifiers of one or more execution modules constituting each
service, and means for managing an update of each execution module
on the basis of the identifier, wherein when the execution module
or modules have been updated by the update managing means, the
updated execution module or modules are placed in the storage means
prior to starting the transaction corresponding to the service.
[0056] In the above-mentioned configuration, when the execution
module or modules have been updated by the update managing means,
the updated execution module or modules are placed in the storage
means prior to starting the transaction corresponding to the
service, which makes possible an update or addition of business
services while maintaining system operation. Thus the flexibility
and availability of the transaction processing system can be
improved. Further, since no auxiliary process group or
routing-closing means needs to be provided for updating the
execution modules, effective use of computer resources can be
realized.
BRIEF DESCRIPTION OF THE DRAWINGS
[0057] FIG. 1 is a block diagram showing a general structure of one
preferred embodiment according to the present invention.
[0058] FIG. 2 is a block diagram showing a hardware structure of
the embodiment according to the present invention.
[0059] FIG. 3 is a table showing an SLA database.
[0060] FIGS. 4A to 4D are tables showing a message dictionary, in
which
[0061] FIG. 4A shows a fixed part definition module and
[0062] FIGS. 4B to 4D show variable part definition modules.
[0063] FIGS. 5A and 5B are descriptive diagrams of a service flow,
in which
[0064] FIG. 5A shows a relationship between node and service
program, and
[0065] FIG. 5B shows a relationship among node name, node type,
input source, output destination and module.
[0066] FIG. 6 is a diagram showing data structure of a message.
[0067] FIG. 7 is a diagram for explaining a configuration of a
request queue.
[0068] FIG. 8 is a PAD diagram showing operations of the request
queue.
[0069] FIG. 9 is a PAD diagram showing operations of a queuing
condition detection module.
[0070] FIG. 10 is a PAD diagram showing operations of a
scheduler.
[0071] FIG. 11 is a diagram showing a configuration of process
management information.
[0072] FIG. 12 is a PAD diagram showing operations of a dynamic
loader.
[0073] FIG. 13 is a diagram showing a configuration of an execution
module condition table.
[0074] FIG. 14 is a PAD diagram showing detailed operations of the
dynamic loader.
[0075] FIG. 15 is a PAD diagram showing operations of an execution
module manager.
[0076] FIG. 16 is a block diagram showing a second embodiment
according to the present invention.
[0077] FIG. 17 is a PAD diagram showing operations of a process
management module according to the second embodiment of the present
invention.
[0078] FIG. 18 is a block diagram showing a conventional
transaction processing system.
[0079] FIGS. 19A to 19C are diagrams showing conventional process
number control, in which
[0080] FIG. 19A is a case where there exists one request,
[0081] FIG. 19B is a case where many requests are waiting, and
[0082] FIG. 19C is a case where the requests are reduced.
[0083] FIG. 20 is a diagram showing conventional priority
control.
[0084] FIG. 21 is a block diagram showing a conventional message
broker.
DESCRIPTION OF THE EMBODIMENTS
[0085] Hereinbelow, one preferred embodiment of the present
invention will be described with reference to the accompanying
drawings.
[0086] 1. Hardware Structure
[0087] FIG. 2 shows a hardware structure of a computer system
according to one preferred embodiment of the present invention. The
system is constructed of one or more client terminals 701, 702, one
or more transaction servers 703, and one or more service program
executing servers 704, 705. It should be noted that the same
computer may be used commonly for the transaction processing server
and the service program executing server.
[0088] The client terminals 701, 702 may be ATM (Automatic Teller
Machine) terminals or personal computers on which operating systems
such as Microsoft Windows or Linux can be run.
[0089] The transaction processing server 703 and the service
program executing servers 704, 705 are, for example, UNIX servers
like Hitachi 3500 series, Windows NT servers (trademark) like
Hitachi Flora (trademark) series, or mainframe general-purpose
computers like Hitachi MP series. Communication lines 710
connecting the clients and each server are, for example,
general-purpose networks such as Ethernet. It should be noted that
the transaction processing server 703 and the service program
executing servers 704, 705 are equipped with storage means, not
shown, such as memories or hard disks.
[0090] 2. General Structure of the Embodiment
[0091] Referring to FIG. 1, description will be made first about a
general structure of the embodiment before detailed description of
the embodiment.
[0092] Client programs 101, 102 are run on the client terminals
701, 702, respectively. The client programs provide interfaces with
terminal users in the system. It should be noted that the term
"client" denotes the customer-specific client terminal 701 or 702
to be connected to the transaction processing server 703.
[0093] The above-mentioned client programs correspond to ATM
control programs or client programs for personal computers. The
client programs may be Web browsers. Each client program builds up
a message in response to input from an end user to send the message
to a transaction monitor 120.
[0094] The transaction monitor 120 is a key feature of the
embodiment. Unlike a conventional transaction monitor, the
transaction monitor 120 of the embodiment can receive messages
(processing requests) for plural service providers (hereinafter
also referred to as customers). It is assumed in FIG. 1 that the
number of service providers is two.
[0095] An SLA database (priority condition database) 113 stores
contract conditions (SLA) related to service levels (priority
conditions, allowable waiting time) under contract with each
service provider. For example, based on contract contents stating
that "transactions for service provider A should be processed in 10
seconds or less," an allowable waiting time of 10 seconds and the
corresponding priority may be stored in the database.
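An SLA record of this kind might be represented as below. The field and key names are assumptions for illustration; the actual database 113 is described later with reference to FIG. 3.

```python
# Contract conditions keyed by (service provider, service), as the
# SLA (priority-condition) database 113 might hold them.
sla_database = {
    ("provider_A", "withdrawal"): {
        "allowable_wait_sec": 10,   # "processed in 10 seconds or less"
        "priority": "high",
    },
}

def lookup_sla(provider, service):
    """Fetch the contract conditions the scheduler consults."""
    return sla_database[(provider, service)]

assert lookup_sla("provider_A", "withdrawal")["allowable_wait_sec"] == 10
```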
[0096] A format of messages from each service provider is defined
in a message dictionary 114. For example, such a definition that
"10.sup.th to 20.sup.th bytes in a message from service provider A
describe customer account number" may be stored.
[0097] Definitions of a service flow for each service provider are
stored in a service flow definition module 115. A group of
execution modules corresponding to respective service nodes of the
service flow are stored in an executing module library 116.
[0098] A preprocessor 103 interprets a message from the client
program 101 or 102 to judge which service provider the message
belongs to.
[0099] Each of the request queues 110, 111 is provided for a
service provider (customer) that accesses the transaction monitor
120; it stores requests sent to that service provider. Since it is
assumed in FIG. 1 that the number of service providers is two,
there exist two request queues 110, 111.
[0100] A queuing condition detection module 112 monitors the
request queues 110, 111 to obtain their conditions (the number of
waiting requests and throughput).
[0101] A scheduler 104 decides scheduling priority in consideration
of queuing conditions obtained from the queuing condition detection
module 112 and the SLA contract conditions stored in the SLA
database 113. The scheduler 104 also manages the number of
processes 108, 109 allocated for each service provider to decide a
proper number of processes which meets the SLA contract.
[0102] The messages taken up by the scheduler 104 are sent to a
dynamic loader 105.
[0103] The dynamic loader 105 decides a service flow corresponding
to the current message by referring to the service flow definition
module 115.
[0104] An execution module manager 106 monitors the executing
module library 116 to detect updates, if any. The dynamic loader
105 refers to the detection results to judge whether the service
nodes needed for execution of the service corresponding to the
current message have already been loaded in the current process. If
they have not been loaded (or old modules remain loaded), a new
group of modules is loaded. Then a service flow execution routine
107 executes the scheduled service flow.
[0105] Hereinbelow, description will be made in detail about each
element constituting the system according to the embodiment of the
present invention.
[0106] 3. SLA Database
[0107] Referring to FIG. 3, an exemplary configuration of the SLA
database 113 will be described.
[0108] The SLA database 113 is stored on a disk in the form of a
table. A data center operating the transaction monitor 120
accumulates contract contents under contract with customers
(service providers) in the SLA database 113.
[0109] The first row in the table contains column heads of Service
Provider's Name 801, Service Name (Processing Type) 802, Class 803,
Upper Limit 804 and Priority 805.
[0110] The column below Service Provider's Name 801 lists names of
service providers as processing contract targets of the transaction
monitor. This column may contain any character string as long as it
is a unique name.
[0111] The column below Service Name 802 lists names of services
provided by the corresponding service providers through the
transaction monitor. The column below Class 803 represents types of
contracts with the respective service providers, where "B.E."
stands for "Best Effort" to indicate such a contract item that the
transaction should be scheduled as long as resources are available.
In this case, if the resources are crowded with other transactions,
the transaction might be kept waiting a long time. On the other
hand, "U.L." stands for "Upper Limit" to indicate a contract item
which decides on the upper limit of transaction waiting time.
[0112] The column below Upper Limit 804 represents upper limit
times under the "U.L." contract. If the corresponding service
provider has a "B.E." contract, this column is not used.
[0113] The column below Priority 805 represents priorities of
services under the "U.L." contract. If the resources are so crowded
that the "U.L." contract cannot be satisfied, scheduling of
services is carried out in order of precedence. It should be noted
that Priority 805 may be decided according to the contract with
each service provider, or the data center side may independently
assign priorities to service providers as customers or to services.
[0114] FIG. 3 shows a basic structure of the SLA database. In
addition to the basic structure, the data center can independently
set other items, for example, such as priority according to
processing load on each service.
[0115] Thus, priority and upper limit (allowable waiting time) are
defined for each service (each customer, where processing=type of
service flow) in the SLA database (means for storing priority
conditions). These definitions are set and stored by an operator
through input means, not shown. The preprocessor 103 and the
scheduler 104 refer to the priority conditions stored in the SLA
database 113. The scheduler 104 (means of execution prioritization)
searches the SLA database 113 for a service provider's name and
service name on the basis of service identification information as
search criteria, in a manner to be described later, to read in the
priority conditions.
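The lookup described above can be sketched as a table keyed by the service provider's name and service name. The Python structure, field names and example values below are illustrative assumptions, not part of the embodiment:

```python
# Minimal sketch of the SLA database of FIG. 3: each row maps a
# (provider, service) pair to its contract class ("U.L." or "B.E."),
# upper-limit waiting time, and priority. All values are hypothetical.
SLA_DATABASE = {
    ("provider_A", "service_A1"): ("U.L.", 10, 1),
    ("provider_A", "service_A2"): ("U.L.", 30, 2),
    ("provider_B", "service_B1"): ("B.E.", None, None),
}

def lookup_sla(provider, service):
    """Return the priority conditions for a provider/service pair."""
    return SLA_DATABASE[(provider, service)]
```

The scheduler would call such a lookup with the service identification information extracted from each message.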
[0116] 4. Message Dictionary
[0117] Referring to FIGS. 4A to 4D and FIG. 6, an exemplary
configuration of the message dictionary 114 will be described. The
message dictionary 114 stores definitions of a message format for
each service provider and each service.
[0118] Each of messages the transaction monitor 120 receives is
composed of a fixed part (1001, 1002 in FIG. 6) and a variable part
(1003 in FIG. 6). The fixed part contains message fields unique to
the transaction monitor 120 while the variable part contains a
message field varied by each service provider and each service.
[0119] Corresponding to the message structure of FIG. 6, the
message dictionary 114 also contains a fixed part definition module
(FIG. 4A) and variable part definition modules (FIGS. 4B, 4C and
4D).
[0120] In the example of FIG. 4A, the fixed part definition module
has columns of Starting Byte (901), Length (902) and Type (903),
indicating that the service provider's name is stored in a 32-byte
field from the zeroth byte, and the service name is stored in a
32-byte field from the 32nd byte. The 64th byte and the following
bytes belong to the variable part.
[0121] The variable part definitions are made by combining a
variable-part index definition module (FIG. 4B) with variable-part
field definition modules (FIGS. 4C and 4D).
[0122] The variable-part index definition module is formed into a
table for use in searching indexes of the variable-part field
definition modules on the basis of the service provider's name 905
and the service name 906 (service identification information)
entered in the fields 1001, 1002 of the message fixed part. For
example, in FIG. 4B, the index for "service provider A" and
"service A1" is "1."
[0123] The variable-part field definition module (FIG. 4C) having
the same table index (="1") represents definitions related to
"service A1." Similarly, the index for "service A2" of "service
provider A" is "2." The variable-part field definition module (FIG.
4D) having the same table index represents definitions related to
"service A2."
[0124] Each table index sets fields of Starting Byte, Length and
Data Type. FIG. 4C shows that the account number is stored in a
four-byte field from the 64th byte, the time stamp is stored in a
12-byte field from the 68th byte, and the withdrawal amount is
stored in an 8-byte field from the 80th byte. FIG. 4D shows the
same except that the 8-byte field from the 80th byte corresponds to
the current balance.
[0125] Upon input of a message to the transaction monitor 120, the
definition modules allow the transaction monitor 120 to judge, from
the fixed part 1001, 1002 of the message, which service provider
and which service the message belongs to. Further, in the variable
part 1003 of the message, parameters of the service can be set.
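The decoding described above can be sketched as follows. The byte offsets follow the fixed-part layout and the FIG. 4C layout for "service A1"; the field names, example values and text encoding are assumptions for illustration only:

```python
# Sketch of message decoding using the dictionary definitions: the
# fixed part (provider name at bytes 0-31, service name at 32-63)
# selects a variable-part layout (here, FIG. 4C for "service A1").
FIXED_FIELDS = [("provider", 0, 32), ("service", 32, 32)]
VARIABLE_FIELDS = {
    ("provider_A", "service_A1"): [
        ("account", 64, 4), ("timestamp", 68, 12), ("withdrawal", 80, 8),
    ],
}

def parse_message(msg: bytes) -> dict:
    fields = {}
    for name, start, length in FIXED_FIELDS:
        # Fixed-part fields are NUL-padded text (an assumption here).
        fields[name] = msg[start:start + length].rstrip(b"\0").decode()
    key = (fields["provider"], fields["service"])
    for name, start, length in VARIABLE_FIELDS[key]:
        fields[name] = msg[start:start + length]
    return fields
```

Given a raw message, the preprocessor could use such a routine both to identify the queue and to expose the service parameters.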
[0126] 5. Service Flow Definition Module and Service Flow Execution
Routine
[0127] As shown in FIG. 5, the service flow execution routine 107
is formed by connecting individual processes on the basis of the
message entered. Combining plural processes (processing nodes),
each of which has its own purpose, makes it possible to realize a
complicated function.
[0128] FIG. 5A shows a service flow consisting of five processing
nodes 601 to 605, in which arrows indicate a flow of the message
between nodes.
[0129] The node 601 receives the message from a terminal via the
transaction monitor, and forwards the message to the processing
node 602. The processing node 602 refers to the message to perform
processing defined by the user while modifying the message if
required, and forwards the message to the downstream nodes 603, 604
and 605 accordingly.
[0130] The node 603 is a message conversion node that performs code
conversion of the message according to the coding format of a
service program on the output destination side (for example, it
performs conversion from EBCDIC code to ASCII code). The nodes 604
and 605 are output nodes from which the message is sent out to
external service programs via the transaction monitor.
[0131] FIG. 5B shows information on the service flow definition
module 115.
[0132] Columns 620 to 624 represent definition conditions for the
nodes 601 to 605, respectively. The service name specifies the
service to which each node belongs. The node name specifies any
node name such that the name is uniquely defined in the flow. The
node type selects and specifies an appropriate one of the node
types provided in the message broker system, such as input node,
processing node, conversion node and output node. The input source
and the output destination specify the node names serving as input
source and output destination of the node specified in the
corresponding "Node Name" column. For example, the node B 602
receives the message from the node A 601, and outputs the message
to the node C 603 and the node E 605. Further, the processing node
and the conversion node have individual processing contents
specified.
[0133] The specification of the processing contents is made
possible by storing corresponding processing modules in the
bottommost "Module" columns of the definition conditions 620 to
624. Since the other nodes such as the input/output nodes perform
routine processing and use predetermined regular modules, their
processing names do not need specifying.
[0134] The service flow definition module 115 and the service flow
execution routine 107 allow the execution of the service flow on
the transaction monitor, which in turn makes it possible to
construct a message broker capable of providing more complicated
services by combining plural service programs.
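A service flow of the kind shown in FIG. 5 can be sketched as a small node table plus a routine that propagates a message along the output destinations. The node names follow FIG. 5B; the node behaviors (upper-casing, ASCII encoding) and the table layout are purely illustrative assumptions:

```python
# Hypothetical service flow table in the style of FIG. 5B: each node
# names its type, its output destinations, and (for processing and
# conversion nodes) a module implementing its behavior.
SERVICE_FLOW = {
    "A": {"type": "input",      "out": ["B"],      "module": None},
    "B": {"type": "processing", "out": ["C", "E"], "module": lambda m: m.upper()},
    "C": {"type": "conversion", "out": ["D"],      "module": lambda m: m.encode("ascii")},
    "D": {"type": "output",     "out": [],         "module": None},
    "E": {"type": "output",     "out": [],         "module": None},
}

def run_flow(start: str, message):
    """Propagate a message through the flow; return output-node results."""
    results = {}
    def visit(node, msg):
        spec = SERVICE_FLOW[node]
        if spec["module"] is not None:
            msg = spec["module"](msg)   # processing/conversion step
        if spec["type"] == "output":
            results[node] = msg
        for nxt in spec["out"]:
            visit(nxt, msg)
    visit(start, message)
    return results
```

Here node B's result reaches node E directly and node D through the conversion node C, mirroring the arrows of FIG. 5A.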
[0135] 6. Executing Module Library
[0136] The executing module library 116 stores execution module
groups needed for executing each service node in the service flow.
Each execution module can be stored, for example, in the UNIX file
format. The file name is made correspondent with the module name
appearing in the service flow definition module, which makes it
possible to retrieve a corresponding execution module from the
service flow.
[0137] The execution module is created in such a format that it can
be dynamically loaded during execution, for example, in the UNIX
DLL (Dynamic Loading Library) format.
[0138] 7. Request Queue
[0139] The request queues 110, 111 are data structures for storing
messages input to the transaction monitor 120 in the order of
input.
[0140] A request queue is created exclusively for each service
provided by each service provider registered in the transaction
monitor 120. FIG. 7 shows the structure of each request queue.
[0141] In FIG. 7, each request queue is constituted of a request
header (1130 to 1132) provided one per queue, and plural request
structures 1101 to 1104 connected from the request header in a list
structure.
[0142] The request header contains fields of service information
1114, SLA information 1115, backward chain 1116, queuing top
pointer or start address 1117 and queuing information 1118.
[0143] The service information field 1114 is for storing a service
provider and service name allocated to the queue. The SLA
information field 1115 is for storing an SLA definition module
stored in the SLA database 113. The SLA definition module is
retrieved from the SLA database 113 on the basis of the service
provider and service name and stored in the request header.
[0144] The backward chain field 1116 is for storing pointers to
connect the request header with the other request headers in a list
structure in case of the presence of plural queues. FIG. 7 shows
such condition that plural request headers 1130 to 1132 are
connected using backward pointers.
[0145] The queuing top pointer or start address field 1117 is for
storing a pointer or start address to the top request structure of
each queue (the first created request structure in each queue). The
queuing information field 1118 is for storing the conditions of the
requests queued in each queue. Directions for use of the queuing
information 1118 will be described later.
[0146] Each request structure contains four fields 1110 to 1113.
The time stamp field 1110 indicates the time of creation of each
request. The forward and backward chain fields 1111 and 1112 store
pointers for linking request structures with one another to form
each queue. The message pointer field 1113 stores a pointer or
address to an area in which the message main body is stored.
[0147] Chains 1101 to 1104 show such a condition that the request
structures form a queue through forward and backward chains.
Message storage areas 1120 to 1123 correspond to the respective
requests, and are pointed to by the message pointers stored in the
corresponding request structures.
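The pointer-linked structures of FIG. 7 can be sketched with Python objects standing in for the chained C structures. The class and attribute names mirror the description; details such as the contents of the SLA information are assumptions:

```python
import time

class Request:                       # request structure (1101-1104)
    def __init__(self, message):
        self.time_stamp = time.time()
        self.forward = None          # forward chain
        self.backward = None         # backward chain
        self.message = message       # message pointer -> storage area

class RequestHeader:                 # request header, one per queue
    def __init__(self, provider, service, sla_info):
        self.service_info = (provider, service)
        self.sla_info = sla_info     # SLA definition from database 113
        self.backward = None         # chain to the next request header
        self.top = None              # queuing top pointer
        self.tail = None
        self.queuing_info = {}       # filled by the detection module

    def enqueue(self, message):
        """Append a new request at the tail, as the preprocessor does."""
        req = Request(message)
        if self.top is None:
            self.top = self.tail = req
        else:
            self.tail.forward = req
            req.backward = self.tail
            self.tail = req
        return req
```

The `enqueue` method corresponds to the pointer operations the preprocessor performs when registering a message at the tail of a queue.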
[0148] 8. Preprocessor
[0149] The preprocessor 103 compares a message input to the
transaction monitor with the contents of the message dictionary 114
to analyze which service provider and which service the message
belongs to. As a result of the analysis, the message is stored in
the appropriate request queue 110 or 111.
[0150] FIG. 8 shows an example of an operation flow of the
preprocessor.
[0151] Upon activation, the preprocessor 103 reads information on
the message fixed part (1001, 1002 in FIG. 6) from the message
dictionary 114 (1201).
[0152] Then, the preprocessor 103 enters a loop 1202 to receive
messages from clients until the transaction monitor 120 finishes
providing services, and becomes a message input waiting state
(1203). After completion of providing all the services, the
preprocessor 103 may exit from the loop 1202 or perform interrupt
processing to break the loop.
[0153] The message input waiting state can be realized, for
example, by the UNIX accept system call. Upon receipt of a message,
the preprocessor 103 uses the message fixed-part information
already read in the step 1201 to extract service provider's name
and service name corresponding to the message (1204).
[0154] Next, the preprocessor 103 searches the request headers one
by one (1205) to retrieve the queue corresponding to the service
provider's name and service name obtained (1206), so as to register
the input message (electronic text) in the queue. The registration
of the message can be carried out such that a new request structure
(1110-1113 in FIG. 7) containing the message, the service
provider's name and the service name is created and put at the tail
end of the queue structure (1101-1104 in FIG. 7) with pointer
operations.
[0155] 9. Queuing Condition Detection Module
[0156] The queuing condition detection module 112 monitors
conditions of the request queues 110, 111 not only to select
requests to be scheduled by the scheduler 104, but also to extract
information necessary to distribute appropriate resources to the
respective services.
[0157] The queuing condition detection module 112 is activated at
fixed intervals by means of the transaction monitor 120 or the
operating system on the server so as to perform predetermined
processing. Here, a sigalarm system call of the UNIX operating
system can be used to activate the queuing condition detection
module 112 at fixed intervals.
[0158] FIG. 9 shows an example of a processing flow executed each
time the queuing condition detection module 112 is activated.
[0159] For each request header (1301), the request structures
pointed to from the request header are scanned (1302), and the
number of requests in the queue is counted up (1303).
Simultaneously, the oldest time stamp from among those of the
request structures in the queue is selected.
[0160] The number of requests and the oldest time stamp are stored
in the queuing information field (1118 in FIG. 7) of the request
header.
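The periodic scan of FIG. 9 reduces, per header, to a count and a minimum over the time stamps. In the sketch below each header is modeled as a dict and each request as a (time stamp, message) tuple; these names and the flat representation are assumptions made for brevity:

```python
# Sketch of the queuing condition detection pass: for each request
# header, count the queued requests and record the oldest time stamp
# in the header's queuing-information field.
def detect_queuing_conditions(headers):
    for header in headers:
        requests = header["requests"]
        header["queuing_info"] = {
            "count": len(requests),
            "oldest": min((ts for ts, _ in requests), default=None),
        }

headers = [{"requests": [(105.0, "m1"), (101.5, "m2")],
            "queuing_info": None}]
detect_queuing_conditions(headers)
```

In the embodiment this pass would be triggered at fixed intervals, e.g. from a SIGALRM-style timer.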
[0161] 10. Scheduler
[0162] The scheduler 104 executes the scheduling of the requests on
the basis of the information extracted by the queuing condition
detection module 112.
[0163] The scheduling is so made that the requests with a U.L.
(upper limit) contract in the SLA class are given higher priority
than those with a B.E. (best effort) contract, and the requests in
the B.E. class are scheduled only when there is room in the
computer resources. In either class, the requests are scheduled
sequentially from the one with the oldest time stamp.
[0164] FIG. 10 shows a specific example of a processing flow for
selecting requests to be scheduled.
[0165] First of all, the scheduler 104 initializes all temporary
variables (1401). In step 1401, "Tul" represents a temporary
variable for storing the time stamp of each request belonging to
the U.L. class service providers, "Tbe" represents a temporary
variable for storing the time stamp of each request in the B.E.
class, and "Pul" and "Pbe" are temporary variables for storing
pointers to the requests in the U.L. and B.E. classes,
respectively.
[0166] Next, for each header (1402) stored in the request header
lists (1130 to 1132 in FIG. 7), the scheduler 104 refers to the SLA
information (1115 in FIG. 7) in the header to judge whether the
header is in the U.L. or B.E. class (1403).
[0167] If the header is in the U.L. class, the scheduler 104
compares the minimum time stamp previously stored as the temporary
variable "Tul" with the oldest time stamp in the queue obtained
from the queuing information (1118 in FIG. 7) stored in the request
header (1404). If the time stamp concerned is older (smaller), the
scheduler 104 replaces the temporary variable "Tul" (1405) and
stores the pointer to the request header concerned as the temporary
variable "Pul".
[0168] On the other hand, if it is judged in the above-mentioned
judgment step 1403 that the header concerned belongs to the B.E.
class, the scheduler 104 uses the temporary variables "Tbe" and
"Pbe" to perform the same operations (1407 to 1409). As a result of
the above-mentioned processing flow, the oldest time stamp and its
associated request header can be obtained in both the U.L. and B.E.
classes.
[0169] Next, the scheduler 104 determines which class, U.L. or
B.E., should be given preference in scheduling.
[0170] First, if either "Pul" or "Pbe" is Null, the scheduler 104
schedules the request whose pointer is not Null (1410 to 1412).
[0171] If neither is Null, the scheduler 104 evaluates both
requests using the following equation 1):
Tul < (current time - upper limit) + e 1)
[0172] In the equation 1), the current time is time during the
execution of the processing. The upper limit is the upper-limit
time (804 in FIG. 3) of the service concerned under SLA contract
defined in the SLA database 113, and is obtained by referring to
the SLA information in the request header (1115 in FIG. 7).
Further, the symbol "e" represents an offset value decided by an
operator of the transaction monitor 120.
[0173] The above-mentioned equation 1) is to check whether the
request with the oldest time stamp in the U.L. class exists in a
time slot (e) of the upper limit delay of the processing defined
under SLA contract. If the request exists, the scheduler 104 gives
a higher priority to the U.L. class to schedule the request in the
U.L. class (1414). On the other hand, if no request exists in the
time slot (e), since there is room to process the U.L. class, the
request with the oldest time stamp in the B.E. class is scheduled
(1415).
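The selection logic of FIG. 10 can be sketched as below. Each queue is summarized as a dict carrying its SLA class, upper limit and oldest time stamp (taken from the queuing information); the dict layout and names are assumptions, while the final comparison implements equation 1), Tul < (current time - upper limit) + e:

```python
# Sketch of the scheduler's queue selection: find the oldest U.L. and
# oldest B.E. requests (the Pul/Pbe search), then prefer U.L. only when
# its oldest request is approaching the contracted upper limit.
def select_queue(queues, now, e):
    ul = be = None                       # Pul / Pbe analogues
    for q in queues:
        if q["oldest"] is None:          # empty queue
            continue
        if q["class"] == "U.L.":
            if ul is None or q["oldest"] < ul["oldest"]:
                ul = q
        elif be is None or q["oldest"] < be["oldest"]:
            be = q
    if ul is None:
        return be
    if be is None:
        return ul
    # Equation 1): schedule U.L. if its oldest request lies within the
    # time slot (e) of the upper-limit delay; otherwise there is room,
    # so the oldest B.E. request is scheduled instead.
    if ul["oldest"] < (now - ul["upper_limit"]) + e:
        return ul
    return be
```

With an upper limit of 10 and offset e of 3, a U.L. request stamped at 100 is preferred once the current time passes 107, and a B.E. request is taken before then.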
[0174] 11. Dynamic Loader
[0175] The dynamic loader 105 receives the request scheduling
results from the scheduler 104 to activate processes and load
execution modules.
[0176] The dynamic loader 105 contains therein process management
information for managing conditions of processes to be activated in
the service flow execution routine 107.
[0177] Referring to FIG. 11, an exemplary structure of the process
management information will be described.
[0178] The process management information is used to manage which
process corresponds to each service and which execution module is
loaded for the process.
[0179] A service structure has fields 1501 to 1503, one of which
stores the service name. The service structures 1521, 1522 thus
configured are linked as shown to create a service-specific list
structure (service chain).
[0180] Process structures 1531, 1532 are pointed by respective
process pointers 1502 from the service structures 1521, 1522,
respectively.
[0181] The process structures 1531, 1532 each have four fields: a
process ID 1504, a pointer to the execution modules, a flag
indicative of whether the process is in use, and a backward pointer
1505.
[0182] The process structures 1531, 1532 are linked as shown to
form the list structure (process chain). Further, execution module
structures 1541-1543 and 1551-1553 are pointed from the process
structures 1531 and 1532, respectively.
[0183] The execution module structures each store module name 1508,
backward pointer 1509 and counter information 1510 indicative of
the version of the execution module concerned. The execution module
structures 1541 to 1543 (or 1551 to 1553) are linked as shown to
form a list structure (module chain).
[0184] Referring next to FIG. 12, a processing flow of the dynamic
loader will be described.
[0185] First of all, the dynamic loader 105 traces the service
chain (1521, 1522 in FIG. 11) in the process management information
to check whether a service to be scheduled exists in the chain
(1602).
[0186] If such a service exists, since at least one process has
been already activated for processing the service, the dynamic
loader 105 traces the process chain (1531, 1532 in FIG. 11) to
search an unused process (1603, 1604).
[0187] If an unused process exists, the dynamic loader 105
shared-locks an execution module table 1700 in the execution module
manager 106 to trace the module chain (1541 to 1543 in FIG. 11)
constituting the process so as to check whether each module has
been changed or updated since the previous loading (1607,
1608).
[0188] The details of the execution module manager 106 and the
execution module management table 1700 will be described later. If
a change is detected, the module concerned is loaded (1609).
[0189] On the other hand, if no service to be scheduled exists in
the chain (1605), or if no unused process exists in the process
chain (1606), the dynamic loader 105 activates a new process to
load a necessary execution module or modules (1621).
[0190] In this processing, the dynamic loader 105 first activates
the process to register the ID in the process chain (1622). Then,
for each column of the service flow definition table in the service
flow definition module 115 (1623), the dynamic loader 105 judges
whether each module belongs to the service to which the dynamic
loader's attention is now directed (1624). If each module belongs
to the service, the dynamic loader 105 loads the module (1625). It
should be noted that the process ID may be a "pid" to be attached
in the UNIX operating system.
[0191] 12. Execution Module Manager
[0192] The execution module manager 106 manages addition, update
and deletion of execution modules in the execution module library
116. The execution module manager 106 has an execution module
condition table as a data structure for holding or storing the
conditions of the execution modules.
[0193] FIG. 13 shows an example of the execution module condition
table.
[0194] Columns below heads 1701 to 1705 of the condition table 1700
correspond to respective execution modules stored in the execution
module library 116. For each execution module, execution module
name and update counter information (identifier) are stored.
[0195] The update counter has an integer indicative of the number
of updates of the execution module concerned. The update counter
stores "1" at the time of registration of a new module, and
increments the number by one each time the module is updated.
[0196] The execution module condition table 1700 is accompanied
with a lock field 1710. The field stores a lock state of the table,
taking three values N (unlocked), S (shared-locking) and E
(exclusive locking).
[0197] Referring next to FIG. 14, description will be made in
detail about a step (1608 in FIG. 12) in which the dynamic loader
105 detects update conditions of the execution module using the
condition table.
[0198] First of all, the dynamic loader 105 shared-locks the lock
field of the execution module condition table 1700 (1606 in FIG.
12) to obtain, from a corresponding module structure (e.g., 1541),
the name of the execution module to which the dynamic loader's
attention is directed in the loop 1607 (1801).
[0199] Next, the dynamic loader 105 looks up the execution module
condition table with the name (1802) to obtain a corresponding
update counter (1803). Further, the dynamic loader 105 compares the
counter value obtained with the value of a version counter (1510 in
FIG. 11) in the module structure (e.g., 1541) (1804).
[0200] If the value of the update counter is equivalent to that of
the version counter, the dynamic loader 105 determines that the
execution module has not been changed since the previous loading to
stop re-loading the execution module.
[0201] On the other hand, if the value of the update counter is
larger than that of the version counter, the dynamic loader 105
determines that the execution module has been changed since the
previous loading to re-load the execution module and substitute the
value of the update counter into the version counter.
[0202] Referring next to FIG. 15, description will be made below
about a processing flow of the execution module manager 106 upon
updating modules in the execution module library.
[0203] First of all, the execution module manager 106 exclusively
locks the lock field 1710 in the execution module condition table
1700 ("E").
[0204] Then, the execution module manager 106 obtains from the
transaction monitor 120 the name of execution module library to be
updated.
[0205] The name concerned can be obtained, for example, from
information input by an operator from an operation console (720 in
FIG. 2) of the transaction monitor. Then, the execution module
manager 106 searches the execution module condition table 1700 to
find a column having the same name as that of the module to be
updated (1904) so as to increment the update counter in the column
(1905). Finally, the execution module manager 106 releases the
execution module condition table 1700 from exclusive locking
("N").
[0206] 13. Operation
[0207] In the above-mentioned structure, the request queues 110,
111 are provided one for each service of each service provider
registered in the transaction monitor 120. In addition to the
operation of the request queues 110, 111, the preprocessor 103
sends each input message to the appropriate request queue 110 or
111 on the basis of the contents of the message dictionary 114. The
queuing condition detection module 112 monitors conditions of the
request queues 110, 111 to select requests to be scheduled by the
scheduler 104. The scheduler 104 controls the requests on the basis
of the information indicative of service priorities to plural
service providers (customers) stored in the SLA database 113
(contract information related to service levels). Therefore, one
transaction monitor 120 (or message broker) can be commonly used
for plural customers while allocating each request to the optimum
resource according to the predetermined priority or resource
conditions, which makes it possible to guarantee proper throughput
on any service.
[0208] The transaction processing system can be used in a data
center that performs contract outsourcing of plural service
providers' systems and centralized control of computer resources.
This makes possible real-time processing of more on-line
transactions with fewer computer resources while maintaining the
throughput guaranteed under contract with the customers. Thus the
reliability and performance of a data center that integrally
processes business transactions for plural customers can be
improved.
[0209] Further, the dynamic loader 105 that implements the
necessary processes for each of the services provided by plural
service providers collectively loads updated modules before
transaction processing, the updated modules being identified by the
execution module manager 106, which detects whether the execution
modules constituting each process have been updated. Such a system
makes it possible to change any service at any time while the
transaction monitor 120 is in operation, without the need to
provide means for disabling the routing of the execution modules or
an auxiliary process group. Such a system can construct a
transaction monitor 120 or message broker capable of enhancing its
flexibility and availability and of making it easy to add and
change the business logic of customers while maintaining effective
use of computer resources, which in turn makes system operation
easy.
[0210] FIGS. 16 and 17 show the second embodiment.
[0211] The first embodiment assumed a particular case where there
was in the service flow execution routine 107 a number of idling
processes enough for the scheduler to schedule all the
requests.
[0212] In contrast, this embodiment assumes a normal case where a
sufficient number of processes may not be secured due to limited
computer resources and some processes need to be traded off between
services.
[0213] This embodiment is provided with a process manager 2001
instead of the scheduler 104. The other elements are the same as
those in the first embodiment.
[0214] The process manager 2001 is operative to control the dynamic
loader 105 by estimating the number of processes to be required for
the processing concerned from service conditions of the request
queues 110, 111 and the SLA contract.
[0215] After completion of a currently processed transaction, each
process enters request acceptable state so that the next request
can be extracted from a corresponding request queue 110 or 111 for
the next transaction processing.
[0216] A processing flow of the process manager 2001 will be
described based on FIG. 17.
[0217] Upon initiating the system, the process manager 2001 obtains
the SLA conditions (in FIG. 3) related to each service from the SLA
database 113. The process manager 2001 periodically monitors the
queue and process conditions when the system is in operation (2103)
to perform the following operations.
[0218] First of all, the process manager 2001 obtains the queuing
information related to each service from the queuing condition
detection module 112 (2102). The queuing information includes the
number of waiting requests and the oldest time stamp. As discussed
with respect to FIG. 9, the queuing information can be obtained by
referring to the queuing information field 1118 of the request
header extracted by the queuing condition detection module 112.
[0219] Next, for each service (2104), the process manager 2001
obtains, from the service flow execution routine, the transaction
starting time and finishing time and the number of processes
corresponding to the service to calculate throughput to the
transaction. In general, since plural processes correspond to one
service (108, 109 in FIG. 16), the total throughput to the service
concerned is determined by the sum of reciprocal numbers of time
periods required for the transactions processed by the respective
processes.
[0220] On the other hand, the process manager 2001 determines, from
the queuing information obtained, a difference between the previous
queuing length and the current queuing length (the number of
waiting requests in the queue) to calculate the frequency of
arrival of requests. The frequency of arrival of requests can be
calculated by dividing the difference in the queuing length by the
time interval.
[0221] Alternatively, the process manager 2001 may obtain the
difference between the start time and stop time of each transaction
and determine the throughput of the transaction by multiplying the
reciprocal of that difference by the number of processes allocated
to the service.
[0222] The total throughput thus obtained is compared with the
frequency of arrival of requests, which makes it possible to
estimate the level of satisfactory throughput to the service
concerned.
[0223] In other words, if the total throughput is larger than the
frequency of arrival of requests, the queuing length is considered
to decrease with time; if it is smaller, the queuing length is
considered to increase with time. Here, the level of satisfactory
throughput is determined by dividing the total throughput by the
frequency of arrival of requests (2109).
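The ratio computed in step 2109 can be sketched as below. The function name and the treatment of a zero arrival rate are assumptions added for the example; the specification only defines the ratio itself.

```python
def satisfaction_level(total_throughput, arrival_freq):
    """Level of satisfactory throughput: a value of 1.0 or more
    means requests are processed at least as fast as they arrive
    (the queue shrinks); below 1.0 the queue grows over time."""
    if arrival_freq <= 0:
        return float("inf")  # assumed: no incoming load is trivially satisfied
    return total_throughput / arrival_freq

print(satisfaction_level(6.0, 3.0))  # → 2.0  (queue shrinks)
print(satisfaction_level(2.0, 4.0))  # → 0.5  (queue grows)
```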
[0224] After determining the level of satisfactory throughput for
each service, the process manager 2001 changes the number of
processes for the service to control the processes so that the
optimum throughput will be distributed to each service. Here, the
process manager 2001 newly calculates the number of processes needed
to raise the level of satisfactory throughput to one or more, in the
order of priority decided according to the SLA contract (2108). If
the newly calculated number of processes is larger than the number
of processes currently existing, the process manager 2001 activates
a number of processes corresponding to the difference between the
newly calculated number and the existing number, loads the necessary
execution modules through the dynamic loader 105, and keeps the
loaded execution modules waiting (2112).
[0225] If the transaction monitor is limited in the total number of
processes and the necessary number of processes cannot all be
activated, as many processes as possible are activated (2113). On
the other hand, if there is room in the level of satisfactory
throughput, surplus processes are stopped to release their system
resources (2114).
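The priority-ordered allocation with a total process limit (steps 2108 and 2113) can be sketched as follows. All names, the tuple layout, and the convention that a lower priority number means a more important SLA contract are assumptions made for this illustration.

```python
import math

def allocate_processes(services, max_total):
    """Decide per-service process counts in SLA priority order.

    `services` maps a service name to a tuple of
    (priority, per_process_throughput, arrival_freq), where a lower
    priority number means a more important contract.  Each service
    is granted, in priority order, enough processes to push its
    satisfaction level to one or more, subject to the transaction
    monitor's total process limit `max_total`.
    """
    allocation = {}
    remaining = max_total
    ordered = sorted(services.items(), key=lambda kv: kv[1][0])
    for name, (_prio, per_proc, arrivals) in ordered:
        needed = math.ceil(arrivals / per_proc) if per_proc > 0 else 0
        granted = min(needed, remaining)  # activate as many as possible
        allocation[name] = granted
        remaining -= granted
    return allocation

# Gold contract is served first; only 8 process slots exist in total.
print(allocate_processes(
    {"gold": (1, 2.0, 6.0),     # needs ceil(6/2) = 3
     "silver": (2, 1.0, 4.0),   # needs ceil(4/1) = 4
     "bronze": (3, 1.0, 5.0)},  # needs 5, but only 1 slot remains
    max_total=8))
# → {'gold': 3, 'silver': 4, 'bronze': 1}
```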
[0226] Such a scheduling technique allows processes to be
distributed to services having higher priorities in terms of the SLA
contract, which increases the probability of success in satisfying
each service contract. At the same time, if there is room in the
level of satisfactory throughput, spare resources can also be
allocated to services whose priorities are low.
[0227] In other words, even if a sufficient number of processes
cannot be secured due to limited computer resources, an appropriate
throughput can be secured according to each SLA contract, thus
making effective use of the computer resources and improving the
system's reliability.
[0228] It should be noted here that when the frequency of arrival of
requests varies widely, the operations shown in FIG. 17 may not be
enough to prevent processes from being frequently started and
stopped. To prevent excessive variations in the number of processes,
control can be carried out by taking process histories into account,
for example, by prohibiting a process, once activated, from being
stopped during a fixed time period.
[0229] Further, in the case that many high-priority requests are
input, the operations in FIG. 17 may keep low-priority requests
waiting a long time. In this case, the minimum number of processes
for each service need only be determined beforehand, so that the
number of processes is increased or decreased within a range that
never falls below the predetermined minimum.
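The two safeguards just described, a fixed no-stop period after activation and a per-service minimum process count, can be combined into one clamp as sketched below. The function signature, the use of monotonic timestamps, and the parameter names are all assumptions for this example.

```python
import time

def adjust_process_count(current, target, minimum, started_at,
                         cooldown_seconds, now=None):
    """Clamp a newly calculated process count for one service.

    A pool started within `cooldown_seconds` of `started_at` is not
    shrunk at all, which avoids thrashing when the arrival rate
    fluctuates; afterwards the count may change freely but never
    falls below the predetermined `minimum`.
    """
    now = time.monotonic() if now is None else now
    if target < current and (now - started_at) < cooldown_seconds:
        return current  # still in cooldown: postpone any shrinking
    return max(target, minimum)

# Shrinking from 6 to 2 processes is postponed during a 30 s
# cooldown, and afterwards is limited by the minimum of 3 processes.
print(adjust_process_count(6, 2, 3, started_at=100.0,
                           cooldown_seconds=30.0, now=110.0))  # → 6
print(adjust_process_count(6, 2, 3, started_at=100.0,
                           cooldown_seconds=30.0, now=200.0))  # → 3
```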
[0230] Another feature of the second embodiment is transaction
processing capable of providing one or more services and connecting
one or more clients to each service. This feature is implemented by
queuing means (110, 111) for storing processing requests from the
clients for services while assigning priorities to the requests for
each service, waiting condition obtaining means (queuing condition
detection module 112) for obtaining the waiting conditions of
processing requests stored in the queuing means, and process
allocating means (process manager 2001) for allocating
transaction-processing processes to each service. In this
configuration, the process allocating means decides the allocation
of processes to each service by referring to the obtained processing
request waiting conditions and the throughput of each transaction.
[0231] To be more specific, process allocation is carried out by
comparing the frequency of arrival of processing requests per unit
time, calculated from the processing request waiting conditions,
with the throughput of the transaction. If the frequency of arrival
of processing requests is larger than the throughput of the
transaction, the number of processes to be allocated is increased;
if it is smaller, the number of processes to be allocated is
reduced.
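The comparison in the preceding paragraph reduces to a three-way decision, sketched below under the assumption that the allocation moves by one process at a time; the function name and the `step` parameter are illustrative, not part of the specification.

```python
def allocation_change(arrival_freq, throughput, step=1):
    """Direction of the allocation change for one service:
    +step processes if requests arrive faster than they are
    processed, -step if slower, and 0 if the two rates balance."""
    if arrival_freq > throughput:
        return step
    if arrival_freq < throughput:
        return -step
    return 0

print(allocation_change(5.0, 3.0))  # → 1   (queue growing: add a process)
print(allocation_change(2.0, 3.0))  # → -1  (headroom: release a process)
```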
* * * * *