U.S. patent application number 13/365938 was published by the patent office on 2013-02-07 under publication number 20130036424 for resource allocation in partial fault tolerant applications.
This patent application is currently assigned to International Business Machines Corporation. The invention is credited to Navendu Jain, Yoonho Park, Deepak S. Turaga, and Chitra Venkatramani.
Publication Number | 20130036424 |
Application Number | 13/365938 |
Family ID | 40845620 |
Publication Date | 2013-02-07 |
United States Patent Application | 20130036424 |
Kind Code | A1 |
Jain; Navendu; et al. | February 7, 2013 |
RESOURCE ALLOCATION IN PARTIAL FAULT TOLERANT APPLICATIONS
Abstract
A method for allocating a set of components of an application to
a set of resource groups includes the following steps performed by
a computer system. The set of resource groups is ordered based on
respective failure measures and resource capacities associated with
the resource groups. An importance value is assigned to each of the
components. The importance value is associated with an effect of
the component on an output of the application. The components are
assigned to the resource groups based on the importance value of
each component and the respective failure measures and resource
capacities associated with the resource groups. The components with
higher importance values are assigned to resource groups with lower
failure measures and higher resource capacities. The application
may be a partial fault tolerant (PFT) application that comprises
PFT application components. The resource groups may comprise a
heterogeneous set of resource groups (or clusters).
Inventors: |
Jain; Navendu; (Austin,
TX) ; Park; Yoonho; (Chappaqua, NY) ; Turaga;
Deepak S.; (Nanuet, NY) ; Venkatramani; Chitra;
(Roslyn Heights, NY) |
|
Applicant: |
Name | City | State | Country |
Jain; Navendu | Austin | TX | US |
Park; Yoonho | Chappaqua | NY | US |
Turaga; Deepak S. | Nanuet | NY | US |
Venkatramani; Chitra | Roslyn Heights | NY | US |
Assignee: | International Business Machines Corporation, Armonk, NY |
Family ID: |
40845620 |
Appl. No.: |
13/365938 |
Filed: |
February 3, 2012 |
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number |
11970841 | Jan 8, 2008 | 8112758 |
13365938 | | |
Current U.S. Class: | 718/104 |
Current CPC Class: | G06F 9/5005 20130101 |
Class at Publication: | 718/104 |
International Class: | G06F 9/46 20060101 G06F009/46 |
Government Interests
[0002] This invention was made with Government support under
Contract No.: H98230-07-C-0383 awarded by the Department of
Defense. The Government has certain rights in this invention.
Claims
1. A method for allocating a set of one or more processing
components of an application to a set of one or more resource
groups, comprising the steps performed by a computer system of:
ordering the set of one or more resource groups based on respective
failure measures and resource capacities associated with the one or
more resource groups; assigning an importance value to each of the
one or more components, wherein the importance value is associated
with an effect of the component on an output of the application;
and assigning the one or more components to the one or more
resource groups based on the importance value of each component and
the respective failure measures and resource capacities associated
with the one or more resource groups, wherein components with
higher importance values are assigned to resource groups with lower
failure measures and higher resource capacities.
2. The method of claim 1, wherein the application is a partial
fault tolerant (PFT) application that comprises a set of one or
more PFT application components.
3. The method of claim 1, wherein the set of one or more resource
groups comprises a heterogeneous set of resource groups.
4. The method of claim 1, wherein the ordering step comprises
sorting the one or more resource groups in a decreasing order based
on a ratio of a respective resource capacity of each of the one or
more resource groups to a failure probability of each of the one or
more resource groups.
5. The method of claim 1, wherein the ordering step comprises
sorting the one or more resource groups in a decreasing order based
on a product of a respective resource capacity of each of the one
or more resource groups and an availability measure of each of the
one or more resource groups.
6. The method of claim 5, wherein the availability measure for a
given resource group is computed as one minus a failure probability
of the given resource group.
7. The method of claim 1, wherein the importance value assigned to
a given component is based on a contribution that the given
component makes to the application output.
8. The method of claim 1, wherein the importance value assigned to
a given component is based on a loss incurred in the application
output value if the resource hosting the given component fails.
9. The method of claim 1, wherein the step of assigning the one or
more components to the one or more resource groups is also based on
one or more specified constraints on the one or more
components.
10. The method of claim 1, wherein an order for assigning
components is determined based on a data flow graph associated with
the application such that a single resource group failure affects
the minimal number of paths from a source to a sink in the data
flow graph.
11. The method of claim 1, wherein the step of assigning the one or
more components to the one or more resource groups is performed
responsive to a failure of at least one of the resources, making
unavailable at least one of the components assigned thereto.
12. The method of claim 1, wherein the effect of a given component
on the output of the application comprises an effect of the given
component on an output quality of the application.
13. The method of claim 12, wherein the effect of a given component
on the application output quality is based on the given component
being in one or more paths of a data flow graph associated with the
application.
14. The method of claim 1, wherein the step of assigning the one or
more components to the one or more resource groups comprises
defining, within a data flow graph associated with the application,
a connected sub-graph of components assigned to a given resource
group.
15. An article of manufacture for allocating a set of one or more
components of an application to a set of one or more resource
groups, the article comprising a non-transitory computer readable
storage medium containing one or more programs, which when executed
by a computer implement the steps of claim 1.
16. Apparatus for allocating a set of one or more components of an
application to a set of one or more resource groups, comprising: a
memory; and at least one processor coupled to the memory and
operative to perform the steps of: ordering the set of one or more
resource groups based on respective failure measures and resource
capacities associated with the one or more resource groups;
assigning an importance value to each of the one or more
components, wherein the importance value is associated with an
effect of the component on an output of the application; and
assigning the one or more components to the one or more resource
groups based on the importance value of each component and the
respective failure measures and resource capacities associated with
the one or more resource groups, wherein components with higher
importance values are assigned to resource groups with lower
failure measures and higher resource capacities.
17. The apparatus of claim 16, wherein the application is a partial
fault tolerant (PFT) application that comprises a set of one or
more PFT application components.
18. The apparatus of claim 16, wherein the ordering step comprises
sorting the one or more resource groups in a decreasing order based
on a ratio of a respective resource capacity of each of the one or
more resource groups to a failure probability of each of the one or
more resource groups.
19. The apparatus of claim 16, wherein the ordering step comprises
sorting the one or more resource groups in a decreasing order based
on a product of a respective resource capacity of each of the one
or more resource groups and an availability measure of each of the
one or more resource groups.
20. The apparatus of claim 16, wherein the importance value
assigned to a given component is based on a contribution that the
given component makes to the application output.
21. The apparatus of claim 16, wherein the importance value
assigned to a given component is based on a loss incurred in the
application output value if the resource hosting the given
component fails.
Description
CROSS-REFERENCE TO RELATED APPLICATION(S)
[0001] This application is a continuation of U.S. application Ser.
No. 11/970,841, filed on Jan. 8, 2008, the disclosure of which is
incorporated herein by reference.
FIELD OF THE INVENTION
[0003] The present invention generally relates to distributed data
processing systems and, more particularly, to techniques for
allocating computing resources to partial fault tolerant
applications in such distributed data processing systems.
BACKGROUND OF THE INVENTION
[0004] Distributed data processing systems need to be highly
available and robust to failures. Traditional approaches to
fault-tolerance employ techniques such as replication or
check-pointing to address the availability requirements. However,
these approaches introduce well-known tradeoffs between cost and
availability. For example, a replicated service may incur
significant overheads to provide strict consistency requirements.
Further, the monetary cost of implementing highly available
services can double for just a fraction of a percentage point of
availability, and under correlated failures, additional
replicas yield strongly diminishing returns in availability
improvement for many replication schemes. Similarly, the overheads
of check-pointing can limit its benefits.
[0005] Many distributed data processing systems (often operating
under limited computing resources) have the property that they can
continue operating and producing useful output even in the presence
of application component failures, though the output quality may be
of a reduced value. We refer to these applications herein as
Partial Fault Tolerant (PFT) applications. In contrast to
applications that require the availability of all components to
operate correctly, PFT applications provide a "graceful
degradation" in performance as the number of failures increases.
For example, aggregation systems such as MapReduce (see, e.g., J.
Dean et al., "MapReduce: Simplified Data Processing on Large
Clusters," OSDI, 2004) based Sawzall (see, e.g., R. Pike et al.,
"Interpreting the Data: Parallel Analysis with Sawzall," Scientific
Programming Journal, Special Issue on Grids and Worldwide Computing
Programming Models and Infrastructure, 2005), SDIMS (see, e.g., P.
Yalagandula et al., "A Scalable Distributed Information Management
System," SIGCOMM, 2004), and PIER (see, e.g., R. Huebsch et al.,
"Querying the Internet with Pier," VLDB, 2003) are likely to be
able to tolerate some missing objects while processing a query
(e.g., AVG, JOIN, etc.) on a distributed database. Similarly, data
mining applications such as WTTW (see, e.g., Verscheure et al.,
"Finding `Who is Talking to Whom` in VoIP Networks Via Progressive
Stream Clustering," ICDM, 2006) and FAB (see, e.g., Turaga et al.,
"Online FDC Control Limit Tuning with Yield Prediction Using
Incremental Decision Tree Learning," Sematech AEC/APC Symposium
XIX, 2007) can still classify data objects under failures, though
with less confidence. Further, for many stream processing
applications with stringent temporal requirements (see, e.g., D. J.
Abadi et al., "The Design of the Borealis Stream Processing
Engine," CIDR, 2005), it is more important to produce partial
results within a given time bound than full results delivered late.
Finally, mission-critical applications deploy multiple sensors at
different physical locations such that at least some of them should
trigger an alert during failures or when operating conditions are
violated (e.g., fire, medical emergencies, etc.).
[0006] However, none of the above fault-tolerance approaches
adequately address (in terms of minimizing cost and maximizing
availability) the assignment of PFT application components or, more
generally, the allocation of computing resources in a distributed
computing system, where the computing resources have certain
failure characteristics and may be heterogeneous in nature.
SUMMARY OF THE INVENTION
[0007] Principles of the invention provide new techniques for
assignment of PFT application components or, more generally, the
allocation of computing resources in a distributed computing
system.
[0008] For example, in one aspect of the invention, a method for
allocating a set of one or more processing components of an
application to a set of one or more resource groups comprises the
following steps performed by a computer system. The set of one or
more resource groups is ordered based on respective failure
measures and resource capacities associated with the one or more
resource groups. An importance value is assigned to each of the one
or more processing components, wherein the importance value is
associated with an effect of the processing component on the
application output. The one or more processing components are
assigned to the one or more resource groups based on the importance
value of each processing component and the respective failure
measures and resource capacities associated with the one or more
resource groups, wherein processing components with higher
importance values are assigned to resource groups with lower
failure measures and higher resource capacities.
[0009] The application may be a partial fault tolerant (PFT)
application that comprises a set of one or more PFT application
components. The set of one or more resource groups may comprise a
heterogeneous set of resource groups (or clusters of machines).
[0010] The ordering step may comprise sorting the one or more
resource groups in a decreasing order. The step of sorting may be
based on a ratio of a respective resource capacity of each of the
one or more resource groups to a failure probability of each of the
one or more resource groups. Alternatively, the step of sorting may
be based on a product of a respective resource capacity of each of
the one or more resource groups and an availability measure of each
of the one or more resource groups. The availability measure for a
given resource group may be computed as one minus the failure
probability of the given resource group.
[0011] An importance value may be based on a contribution that the
processing component makes to the application output.
Alternatively, an importance value may be based on a loss incurred
in the application output value if the resource hosting the given
processing component fails.
[0012] The allocating step may also be based on one or more
specified constraints on the one or more components.
[0013] The allocating step may determine an order for assigning
components based on a data flow graph associated with the
application to a set of resource groups, such that a single
resource group failure affects a minimal number of paths from a
source (where computation on a data item is initiated) to a sink
(where the final output is produced) in the data flow graph.
[0014] The allocating step may be performed after a failure of at
least one of the components or resource groups (thus, it may also
be considered a run-time reallocation).
[0015] These and other objects, features, and advantages of the
present invention will become apparent from the following detailed
description of illustrative embodiments thereof, which is to be
read in connection with the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0016] FIG. 1 illustrates a data aggregation system, according to
one embodiment of the invention.
[0017] FIG. 2 illustrates three possible allocations of three
processing components to two resource groups (clusters) for the
data aggregation system in FIG. 1.
[0018] FIGS. 3A and 3B illustrate a methodology for allocating
components of a PFT application running on distributed data
processing systems, in accordance with one embodiment of the
invention.
[0019] FIG. 4 illustrates a computing system in which methodologies
of the invention may be implemented, according to one embodiment of
the invention.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
[0020] Illustrative principles of the invention address a key
problem: how to assign PFT application components to a distributed
computing system comprising a set of heterogeneous resource groups
(also referred to herein as "clusters") with different resource
capacities and availabilities, under a correlated failure model.
Specifically, a method for placement of processing components for
PFT applications is provided that prevents, delays, or minimizes
the "loss" in the expected application output value under failures
before a full recovery from failures takes effect.
[0022] By way of example only, an application component may be
defined as a set of software modules which perform various
operations on input data elements in order to generate output data
elements. Examples of input data elements include packets of audio
data, email data, computer generated events, network data packets,
or readings from sensors, such as environmental, medical or process
sensors. Examples of transformations conducted by individual
application components include parsing the header of an IP packet,
aggregating audio samples into an audio segment or performing
speech detection on an audio segment, sampling sensor readings,
averaging or joining the readings over a time window of samples,
applying spatial, temporal, or frequency filters to extract
specific signatures over the audio or video segments, etc. The
application components are composed into an application represented
as a data-flow graph. Many such applications can tolerate partial
failures; these are PFT applications.
[0023] The method determines the assignment of PFT application
components to clusters such that the loss in the output value of
the PFT applications is minimized under failures.
[0024] The method incorporates the following in computing the
resource allocation: (i) a mathematical model of cluster failures
where each cluster is assigned a failure probability under a
correlated failure model, and where individual cluster failures are
considered independent; (ii) the resource capacities of clusters;
and (iii) the availability and the placement constraints provided
by the applications.
[0025] The component allocation method includes the following
steps.
[0026] 1. First, the computing clusters are ordered (sorted in
decreasing order) based on the ratio of their resource capacity to
their failure probability. Alternatively, the ordering may be based
on the product of resource capacity and (1 - failure probability),
the latter quantity also referred to herein as availability.
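As a rough sketch only (the cluster names, capacities, and failure probabilities below are hypothetical), the two orderings can be expressed in Python as:

```python
# Hypothetical clusters: (name, resource capacity c_j, failure probability p_j)
clusters = [("T1", 100, 0.10), ("T2", 80, 0.02), ("T3", 120, 0.30)]

# Ordering by decreasing ratio of capacity to failure probability.
by_ratio = sorted(clusters, key=lambda t: t[1] / t[2], reverse=True)

# Alternative ordering by decreasing capacity times availability (1 - p_j).
by_availability = sorted(clusters, key=lambda t: t[1] * (1 - t[2]), reverse=True)
```

Note that the two criteria can yield different rankings: with these values the ratio favors the highly reliable T2, while the capacity-availability product favors the larger clusters.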
[0027] 2. Second, each application component is assigned a relative
"importance value" (scalar value) defined as its contribution to
the application output. Alternatively, this importance value is the
"loss" incurred in the application's total output value if the
resource hosting that component fails.
[0028] 3. Third, the component allocation method uses both (a) the
importance metric to rank application components and (b) the sorted
order of clusters so that highly important components get assigned
to highly reliable computing clusters with high resource
capacities.
[0029] The method may also include the step of allocating
application components based on their specified constraints on
resources (such as the need to be allocated to a cell blade or to a
secure tamper-resistant node, etc.), while still addressing the
goal of minimizing the loss in the application output value under
failures.
[0030] The method determines an order for assigning components
based on the application data flow graph such that a single cluster
failure affects the minimal number of paths from a source to a
sink.
[0031] The method aims to minimize the total weighted "loss" in the
expected application output value for a plurality of applications
when these applications execute on and share access to the same set
of computing clusters. Further, the method may also include factors
such as processing component reuse and input data reuse across a
plurality of applications, relative priorities of applications in
terms of ordering their expected output value, fault-tolerant
characteristics of individual applications, and delay constraints
on output response by an application, etc.
[0032] The method is also applied when a failure occurs in a PFT
application, to reallocate the failed components to the available
resource clusters.
[0033] Advantageously, the inventive method provides for component
placement, wherein both resource capacities and failure
probabilities are used to assign application components to
computing clusters. Prior work (see U.S. Patent Application Ser.
No. 11/735,026 (Attorney Docket No. YOR920060857US1), "System and
Method for Dependent-Failure Aware Allocation of Distributed
Data-Processing Systems," filed Apr. 13, 2007, the disclosure of
which is incorporated by reference herein) only uses resource
capacities but not failure probabilities. As a result, the
technique used in prior work might allocate all application
components to the cluster with the largest capacity but having the
smallest availability, thereby significantly reducing the
availability of the application hosted on the distributed data
processing system.
[0034] By way of further advantage, components are allocated in
decreasing importance to clusters by defining a connected sub-graph
comprising components that are all co-located on the same cluster.
This allocation has the advantage of limiting the effect of a
cluster's failure to the minimal number of paths from a source to a
sink. Prior work may assign to the same cluster a set of components
that does not form a connected sub-graph. Therefore, a single
cluster failure can affect many more paths in the prior work's
technique, which the above method for assigning processing
components in this invention addresses.
[0035] Still further, the inventive method is applied during
failure recovery. When a subset of the application components has
failed, this method can be applied to restore the failed components
to the available resources, thereby improving the application
output value.
[0036] While certain illustrative embodiments of the invention will
be described herein from the perspective of data stream
applications, it is to be understood that the principles of the
invention are not limited to use with any particular application or
any data processing system. Rather, principles of the invention are
more generally applicable to any application and any data
processing system in which it would be desirable to minimize the
effect of failures on the application output quality.
[0037] Assuming a distributed data processing system model, the
problem can be precisely stated as follows. Given a distributed
computing system comprising n clusters (T.sub.1, T.sub.2, . . . ,
T.sub.n) each with a resource capacity c.sub.i and a failure
probability p.sub.i (i ranges from [1, n]), and a PFT application
made up of m components (C.sub.1,C.sub.2, . . . , C.sub.m) each of
which may execute on any cluster, allocate each of the m components to
one of the n clusters such that the loss in expected application
output value is minimized under failures subject to the constraints
imposed by the application data flow graph, the resource
capacities, and the failure probabilities.
[0038] Thus, to overcome the above-mentioned drawback in
distributed data processing systems (i.e., in the event of a
failure-oblivious allocation of application components to computing
clusters, even a single cluster failure can have a significant
impact on the application's output quality if its highly important
components were placed on that cluster), principles of the
invention employ a "failure aware" design concept. Such a failure
aware design concept provides the differentiation between clusters
that are highly available and clusters that are most likely to
fail, and uses this information to make assignment decisions of
processing components to resource clusters.
[0039] FIG. 1 shows a data aggregation system according to one
embodiment of the invention. As shown, the illustrative data
aggregation system includes a plurality of components (11), wherein
components 11-2 and 11-3 each receive the data inputs for
aggregation. The components forward the inputs (k.sub.p and
k.sub.q) to component 11-1, which computes the aggregate result
(SUM, in this case).
[0040] It is to be appreciated that such components may be
logically allocated portions of processing resources (virtual
machines) within one computing system, such as a mainframe
computer. Alternatively, they could be allocated one or more types
of computing devices, e.g., server, personal computer, laptop
computer, handheld computing devices, etc. However, principles of
the invention are not limited to any particular type of computing
device or computing architecture. While the illustrative embodiment
shows only three nodes, it is to be appreciated that the system can
include more than three nodes.
[0041] FIG. 2 illustrates three possible component allocations of
three components to two clusters for the data aggregation system in
FIG. 1: (a) assign root component 11-1 to one cluster (black shaded
cluster or "cluster 1") and components 11-2 and 11-3 to another
cluster (gray shaded cluster or "cluster 2"), (b) assign 11-1 and
11-3 to the gray cluster and 11-2 to the black cluster, and (c)
assign all 11-1, 11-2, and 11-3 to the gray cluster.
[0042] Note that allocation (b) is better than allocation (a)
because if the black cluster fails, then the application output for
allocation (a) goes to 0. On the other hand, under allocation (b),
the system could still process data flowing from 11-3 to 11-1. If
the gray cluster fails, both allocations give no output. A careful
calculation shows that the best allocation, however, is (c), which
keeps all components on the same cluster. The main intuition is that
only one cluster-failure scenario affects allocation (c), while two
cluster-failure scenarios can hinder allocations (a) and (b).
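This intuition can be checked with a simple expected-value calculation under an assumed output model; the failure probabilities and the rule that a single surviving source path yields half the output are illustrative assumptions, not part of the disclosed method:

```python
p_black = p_gray = 0.1  # hypothetical, independent cluster failure probabilities

# Assumed output model: value 1.0 if both source paths survive, 0.5 if
# exactly one source path into root 11-1 survives, 0.0 if 11-1 is down.

# (a) 11-1 on black; 11-2 and 11-3 on gray: either failure zeroes the output.
ev_a = (1 - p_black) * (1 - p_gray)

# (b) 11-1 and 11-3 on gray; 11-2 on black: a black failure still leaves
# the 11-3 -> 11-1 path, so half the output survives.
ev_b = (1 - p_gray) * ((1 - p_black) * 1.0 + p_black * 0.5)

# (c) all three components on gray: only a gray failure matters.
ev_c = 1 - p_gray
```

With these numbers the expected output values are 0.81, 0.855, and 0.9 respectively, matching the ordering (a) < (b) < (c) described above.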
[0043] There are several important observations from this example.
First, we observe that it is preferable to allocate as many
components as possible to the same cluster (subject to cluster
resource constraints) to minimize the loss in the expected output
value under failures. Second, we observe that it is preferable to
assign components on independent paths to different clusters to
avoid dependent failures. Finally, for heterogeneous clusters with
different failure probabilities, we observe that it is preferable
to assign "highly important" components to clusters with the lowest
failure probabilities. We use these observations in designing a
component placement algorithm to be described below.
[0044] These observations suggest three guiding principles: (1)
components of higher importance should be placed on clusters with
highest capacities and lowest failure probabilities; (2) all
components lying on a path from a source to the sink should be
co-located on the same cluster (if possible), i.e., minimize the
total number of clusters on all paths; and (3) assign components on
independent paths to different clusters to avoid dependent
failures.
[0045] The method of component allocation defines a connected
sub-graph of processing components that are all allocated to the
same resource cluster. The practical advantage of this method is to
have minimal effect of a single cluster failure on the number of
affected paths.
[0046] FIGS. 3A and 3B illustrate a flow diagram showing a method
for allocating components of a PFT application running on a
distributed data processing system, in accordance with one
embodiment of the invention.
[0047] In general, the steps of FIG. 3 correspond to the following
pseudo-code which describes a fault-aware component placement
algorithm. Thus, reference will be made below to the steps of FIG.
3 that correspond to the pseudo-code.
[0048] Algorithm 300 starts (301) by inputting (302) a set C of all
PFT application components, a set T of all clusters, and the
application data flow graph G(C, E). The algorithm proceeds as
follows:
[0049] 1: Calculate the importance I(C) for components C={C.sub.1,
C.sub.2, . . . , C.sub.m} (303).
[0050] 2: Rank the clusters T.sub.1, T.sub.2, . . . , T.sub.n
sorted (decreasing) on c.sub.j/p.sub.j (j ranges from [1, n])
(303).
[0051] 3: j:=1 (303)
[0052] 4: while set C is not empty do (304)
[0053] 5: Select the highest importance component C.sub.i from C
(305)
[0054] 6: while T.sub.j has spare capacity do (306)
[0055] 7: Assign C.sub.i to T.sub.j; remove C.sub.i from set C;
initialize set SG to {C.sub.i} (307 and 308)
[0056] 8: Select highest importance C.sub.k from C such that
C.sub.k is connected to SG by an edge in E (as described below)
(309)
[0057] 9: if a C.sub.k satisfying (8) exists AND T.sub.j has spare
capacity then (310)
[0058] 10: Assign C.sub.k to T.sub.j; remove C.sub.k from set C;
add {C.sub.k} to SG (311 and 312)
[0059] 11: else {no such C.sub.k exists OR T.sub.j has no spare
capacity}
[0060] 12: break;
[0061] 13: end if
[0062] 14: end while
[0063] 15: if T.sub.j has no spare capacity then (306)
[0064] 16: j:=j+1 (313)
[0065] 17: end if
[0066] 18: end while
[0067] 19: stop (314)
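As an illustration only, the pseudo-code above might be rendered in Python roughly as follows; the unit-cost capacity accounting, the data structures, and all names are assumptions made for this sketch, not a reference implementation of the claimed method:

```python
def place(components, importance, clusters, edges):
    """Sketch of the fault-aware placement algorithm.

    components: list of component names C
    importance: dict mapping component name -> importance value I(C)
    clusters:   list of (name, capacity, failure probability) tuples T
    edges:      set of (u, v) pairs from the data flow graph G(C, E)
    Each component is assumed to consume one unit of cluster capacity.
    """
    # Step 2: rank clusters in decreasing order of c_j / p_j.
    ranked = sorted(clusters, key=lambda t: t[1] / t[2], reverse=True)
    remaining = set(components)
    assignment = {}
    j = 0
    while remaining and j < len(ranked):
        cluster, capacity, _ = ranked[j]
        used = sum(1 for v in assignment.values() if v == cluster)
        if used >= capacity:  # steps 15-16: advance to the next cluster
            j += 1
            continue
        # Step 5: seed a sub-graph with the most important component left.
        cand = max(remaining, key=lambda c: importance[c])
        sub_graph = set()
        while cand is not None and used < capacity:
            # Steps 7 and 10: assign, remove from C, and grow SG.
            assignment[cand] = cluster
            remaining.discard(cand)
            sub_graph.add(cand)
            used += 1
            # Step 8: most important component connected to SG by an edge.
            linked = [c for c in remaining
                      if any((c, s) in edges or (s, c) in edges
                             for s in sub_graph)]
            cand = max(linked, key=lambda c: importance[c], default=None)
    return assignment
```

For the aggregation example of FIG. 1 (root 11-1 fed by sources 11-2 and 11-3), all three components end up co-located on the top-ranked cluster whenever it has sufficient spare capacity, mirroring allocation (c) above.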
[0068] Thus, in more general terms, given an application data flow
graph G(V, E), the method for component assignment includes the
following step: allocate components in decreasing importance to
clusters ranked by c.sub.j/p.sub.j (j ranges from [1, n]). The
method may further define a connected sub-graph SG of components
that are co-located on the same cluster (say T) as follows: at each
step, assign the highest importance C.sub.k if: (1) T has spare
capacity; and (2) C.sub.k is connected to SG by an edge in E, i.e.,
there is an edge from C.sub.k to C.sub.p and C.sub.p belongs to the
sub-graph SG.
[0069] The method for component assignment may perform the step of
allocating components in decreasing importance to clusters ranked
by c.sub.j * (1-p.sub.j) (j ranges from [1, n]), where 1-p.sub.j is
also termed the availability of a cluster.
[0070] Further, ties between clusters having an equal ratio of
c.sub.j/p.sub.j or c.sub.j * (1-p.sub.j) can either be arbitrarily
broken, or based on comparing p.sub.j values against a threshold
and selecting the cluster with the smaller p.sub.j value, or based
on comparing c.sub.j values against a threshold and selecting the
cluster with the higher c.sub.j value, or based on selecting the
cluster with the smaller p.sub.j value if both the clusters satisfy
a minimum threshold of c.sub.j, or based on selecting the cluster
with the higher c.sub.j value if both the clusters satisfy a
maximum threshold of p.sub.j, or any combination of these schemes
and other techniques.
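One of the tie-breaking schemes above, preferring the smaller p.sub.j when the ratios are equal, can be sketched with a compound sort key (the cluster values are hypothetical):

```python
# T1 and T2 have the same capacity-to-failure ratio (500); the tie is
# broken in favor of T1, which has the smaller failure probability.
clusters = [("T2", 100, 0.20), ("T1", 50, 0.10), ("T3", 90, 0.30)]
ranked = sorted(clusters, key=lambda t: (t[1] / t[2], -t[2]), reverse=True)
```

The secondary key -t[2] means that among clusters with equal ratios, the one with the lower failure probability sorts first under the descending order.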
[0071] Embodiments of the present invention can take the form of an
entirely hardware embodiment, an entirely software embodiment or an
embodiment including both hardware and software elements. In a
preferred embodiment, the present invention is implemented in
software, which includes but is not limited to firmware, resident
software, microcode, etc.
[0072] Furthermore, the invention can take the form of a computer
program product accessible from a computer-usable or
computer-readable medium providing program code for use by or in
connection with a computer or any instruction execution system. For
the purposes of this description, a computer-usable or
computer-readable medium can be any apparatus that may include, store,
communicate, propagate, or transport the program for use by or in
connection with the instruction execution system, apparatus, or
device. The medium can be an electronic, magnetic, optical,
electromagnetic, infrared, or semiconductor system (or apparatus or
device) or a propagation medium. Examples of a computer-readable
storage medium include a semiconductor or solid state memory,
magnetic tape, a removable computer diskette, a random access
memory (RAM), a read-only memory (ROM), a rigid magnetic disk and
an optical disk. Current examples of optical disks include compact
disk--read only memory (CD-ROM), compact disk--read/write (CD-R/W)
and DVD.
[0073] A data processing system suitable for storing and/or
executing program code such as the computing system 400 shown in
FIG. 4 may include at least one processor 402 coupled directly or
indirectly to memory element(s) 404 through a system bus 410. The
memory elements can include local memory employed during actual
execution of the program code, bulk storage, and cache memories
which provide temporary storage of at least some program code to
reduce the number of times code is retrieved from bulk storage
during execution. Input/output or I/O device(s) 406 (including but
not limited to keyboards, displays, pointing devices, etc.) may be
coupled to the system either directly or through intervening I/O
controllers.
[0074] Network adapter(s) 408 may be included to enable the data
processing system to become coupled to other data processing
systems or remote printers or storage devices through intervening
private or public networks. Modems, cable modems, and Ethernet cards
are just a few of the currently available types of network
adapters. It is to be appreciated that the term "processor" as used
herein is intended to include any processing device, such as, for
example, one that includes a CPU (central processing unit) and/or
other processing circuitry. It is also to be understood that the
term "processor" may refer to more than one processing device and
that various elements associated with a processing device may be
shared by other processing devices. Thus, software components
including instructions or code for performing the methodologies
described herein may be stored in one or more of the associated
memory devices (e.g., ROM, fixed or removable memory) and, when
ready to be utilized, loaded in part or in whole (e.g., into RAM)
and executed by a CPU.
[0075] Although illustrative embodiments of the present invention
have been described herein with reference to the accompanying
drawings, it is to be understood that the invention is not limited
to those precise embodiments, and that various other changes and
modifications may be made by one skilled in the art without
departing from the scope or spirit of the invention.
* * * * *