U.S. patent application number 12/765637, for a network data congestion management probe system, was published by the patent office on 2011-10-27.
This patent application is currently assigned to International Business Machines Corporation. The invention is credited to Daniel Crisan, Casimer M. DeCusatis, Mitch Gusat, and Cyriel J.A. Minkenberg.
Application Number: 20110261696 (Appl. No. 12/765637)
Family ID: 44118916
Publication Date: 2011-10-27

United States Patent Application 20110261696
Kind Code: A1
Crisan; Daniel; et al.
October 27, 2011
NETWORK DATA CONGESTION MANAGEMENT PROBE SYSTEM
Abstract
A system to investigate congestion in a computer network may
include network devices to route data packets throughout the
network. The system may also include a source node that sends a
probe packet to the network devices to gather information about the
traffic queues at each network device that receives the probe
packet. The system may further include a routing table at each
examined network device that is based upon the gathered information
for each respective traffic queue.
Inventors: Crisan; Daniel (Zurich, CH); DeCusatis; Casimer M. (Poughkeepsie, NY); Gusat; Mitch (Langnau, CH); Minkenberg; Cyriel J.A. (Gutenswil, CH)
Assignee: International Business Machines Corporation, Armonk, NY
Family ID: 44118916
Appl. No.: 12/765637
Filed: April 22, 2010
Current U.S. Class: 370/235; 370/392
Current CPC Class: H04L 45/127 20130101; H04L 45/00 20130101; H04L 47/11 20130101; H04L 45/125 20130101; H04L 45/66 20130101; H04L 45/26 20130101
Class at Publication: 370/235; 370/392
International Class: H04L 12/56 20060101 H04L012/56
Claims
1. A system comprising: network devices to route data packets
throughout a network; a source node that sends a probe packet to
the network devices to gather information about the traffic queues
at each network device that receives the probe packet; and a
routing table at each network device that receives the probe
packet, and the routing table is based upon the gathered
information for each respective traffic queue.
2. The system of claim 1, wherein the network devices are members
of at least one virtual local area network.
3. The system of claim 1, wherein the probe packets include at
least one of a layer 2 flag and sequence/flow/source node IDs.
4. The system of claim 1, wherein each network device can ignore
the probe packet if it is busy.
5. The system of claim 1, wherein at least one of the network
devices provides its extended queue status to the source node in
response to receiving the probe packet.
6. The system of claim 1, wherein a network device provides its
extended queue status to other network devices in response to
receiving the probe packet.
7. The system of claim 5, wherein the extended queue status
includes at least one of the number of pings from any flow ID
received since the last queue change, the number of packets
forwarded since the last queue change, and pointers to a complete
network device core dump.
8. The system of claim 5, wherein if an extended queue status
exceeds a threshold level, the source node updates the routing
table to rebalance traffic loads.
9. The system of claim 1, wherein the probe packet is sent in
response to the source node receiving a threshold number of
congestion notification messages in a given time interval.
10. A method comprising: sending a probe packet to network devices
from a source node to gather information about the traffic queues
at each network device that is examined by the probe packet; and
basing a routing table at each network device that receives the
probe packet on the gathered information for each respective
traffic queue.
11. The method of claim 10, further comprising organizing the
network devices into a virtual local area network.
12. The method of claim 10, further comprising structuring the
probe packets to include at least one of a layer 2 flag and
sequence/flow/source node IDs.
13. The method of claim 10, further comprising sending an extended
queue status of at least one of the network devices to the source
node in response to receiving the probe packet.
14. The method of claim 10, further comprising providing an
extended queue status of a network device to other network devices
in response to the network device receiving the probe packet.
15. The method of claim 13, further comprising including at least
one of the number of pings from any flow ID received since the last
queue change, the number of packets forwarded since the last queue
change, and pointers to a complete network device core dump as part
of the extended queue status.
16. The method of claim 13, further comprising updating the routing
table to rebalance traffic loads via the source node if the
extended queue status exceeds a threshold level.
17. A computer program product embodied in a tangible media
comprising: computer readable program codes coupled to the tangible
media to investigate congestion in a computer network, the computer
readable program codes configured to cause the program to: send a
probe packet to network devices from a source node to gather
information about the traffic queues at each network device that is
examined by the probe packet; and base a routing table at each
network device that receives the probe packet on the gathered
information for each respective traffic queue.
18. The computer program product of claim 17, further comprising
program code configured to at least one of: organize the network
devices into a virtual local area network; and structure the probe
packets to include at least one of a layer 2 flag and
sequence/flow/source node IDs.
19. The computer program product of claim 17, further comprising
program code configured to: send an extended queue status of at
least one of the network devices to the source node in response to
receiving the probe packet.
20. The computer program product of claim 19, further comprising
program code configured to at least one of: provide an extended
queue status of a network device to other network devices in
response to the network device receiving the probe packet; include
at least one of the number of pings from any flow ID received since
the last queue change, the number of packets forwarded since the
last queue change, and pointers to a complete network device core
dump as part of the extended queue status; and update the routing
table to rebalance traffic loads via the source node if the
extended queue status exceeds a threshold level.
Description
BACKGROUND OF THE INVENTION
[0001] 1. Field of the Invention
[0002] The invention relates to the field of computer systems and,
more particularly, to the management of data congestion in such
systems.
[0003] 2. Description of Background
[0004] Generally, conventional Ethernet fabrics are dynamically
routed. In other words, packets are directed from one switch node
to the next, hop by hop, through the network. Examples of protocols
used include Converged Enhanced Ethernet (CEE), Fibre Channel over
Converged Enhanced Ethernet (FCoCEE), and Data Center Bridging
(DCB), as well as proprietary routing schemes.
SUMMARY OF THE INVENTION
[0005] According to one embodiment of the invention, a system to
investigate congestion in a computer network may include network
devices to route data packets throughout the network. The system
may also include a source node that sends a probe packet to the
network devices to gather information about the traffic queues at
each network device that receives the probe packet. The system may
further include a routing table at each network device that
receives the probe packet, and the routing table is based upon the
gathered information for each respective traffic queue.
[0006] The network devices may be members of at least one virtual
local area network. The probe packets may include a layer 2 flag
and/or sequence/flow/source node IDs.
[0007] Each network device may ignore the probe packet if it is
busy. At least one of the network devices may provide its extended
queue status to the source node in response to receiving the probe
packet.
[0008] A network device may provide its extended queue status to
other network devices in response to receiving the probe packet.
The extended queue status includes the number of pings from any
flow ID received since the last queue change, the number of packets
forwarded since the last queue change, and/or pointers to a
complete network device core dump.
[0009] If an extended queue status exceeds a threshold level, the
source node updates the routing table to rebalance traffic loads.
The probe packet is sent in response to the source node receiving a
threshold number of congestion notification messages in a given
time interval.
[0010] Another aspect of the invention is a method to investigate
congestion in a computer network that may include sending a probe
packet to network devices from a source node to gather information
about the traffic queues at each network device that is examined by
the probe packet. The method may also include basing a routing
table at each network device that receives the probe packet on the
gathered information for each respective traffic queue.
[0011] The method may further include organizing the network
devices into a virtual local area network. The method may
additionally include structuring the probe packets to include at
least one of a layer 2 flag and sequence/flow/source node IDs.
[0012] The method may further include sending an extended queue
status of at least one of the network devices to the source node in
response to receiving the probe packet. The method may additionally
include providing the extended queue status of a network device to
other network devices in response to the network device receiving
the probe packet.
[0013] The method may further comprise including at least one of
the number of pings from any flow ID received since the last queue
change, the number of packets forwarded since the last queue
change, and pointers to a complete network device core dump as part
of the extended queue status. The method may additionally include
updating the routing table to rebalance traffic loads via the
source node if the extended queue status exceeds a threshold
level.
[0014] Another aspect of the invention is computer readable
program codes coupled to tangible media to investigate congestion
in a computer network. The computer readable program codes may be
configured to cause the program to send a probe packet to network
devices from a source node to gather information about the traffic
queues at each network device that is examined by the probe packet.
The computer readable program codes may also base a routing table
at each network device that receives the probe packet on the
gathered information for each respective traffic queue.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] FIG. 1 is a schematic block diagram of a system to
investigate congestion in a computer network in accordance with the
invention.
[0016] FIG. 2 is a flowchart illustrating method aspects according
to the invention.
[0017] FIG. 3 is a flowchart illustrating method aspects according
to the method of FIG. 2.
[0018] FIG. 4 is a flowchart illustrating method aspects according
to the method of FIG. 2.
[0019] FIG. 5 is a flowchart illustrating method aspects according
to the method of FIG. 2.
[0020] FIG. 6 is a flowchart illustrating method aspects according
to the method of FIG. 2.
[0021] FIG. 7 is a flowchart illustrating method aspects according
to the method of FIG. 5.
[0022] FIG. 8 is a flowchart illustrating method aspects according
to the method of FIG. 2.
DETAILED DESCRIPTION OF THE INVENTION
[0023] The invention will now be described more fully hereinafter
with reference to the accompanying drawings, in which preferred
embodiments of the invention are shown. Like numbers refer to like
elements throughout, like numbers with letter suffixes are used to
identify similar parts in a single embodiment, and letter suffix
lower case n is a variable that indicates an unlimited number of
similar elements.
[0024] With reference now to FIG. 1, a system 10 to investigate
congestion in a computer network 12 is initially described. The
system 10 is a programmable apparatus that stores and manipulates
data according to an instruction set as will be appreciated by
those of skill in the art.
[0025] In one embodiment, the system 10 includes a communications
network(s) 12, which enables a signal, e.g. data packet, probe
packet, and/or the like, to travel anywhere within, or outside of,
system 10. The communications network 12 is wired and/or wireless,
for example. The communications network 12 is local and/or global
with respect to system 10, for instance.
[0026] The system 10 includes network devices 14a-14n to route data
packets throughout the network 12. The network devices 14a-14n are
computer network equipment such as switches, network bridges,
routers, and/or the like. The network devices 14a-14n can be
connected together in any configuration to form the communications
network 12, as will be appreciated by those of skill in the
art.
[0027] The system 10 may further include a source node 16 that
sends data packets to any of the network devices 14a-14n. There can
be any number of source nodes 16 in the system 10. The source node
16 is any piece of computer equipment that is able to send data
packets to the network devices 14a-14n.
[0028] The system 10 can also include a routing table 18a-18n at
each respective network device 14a-14n. In another embodiment, the
route over which the data packets are sent by any network device
14a-14n is based upon each respective routing table 18a-18n.
[0029] The network devices 14a-14n can be members of at least one
virtual local area network 20. The virtual local area network 20
permits the network devices 14a-14n to be configured and/or
reconfigured with less regard for each network device's 14a-14n
physical characteristics as they relate to the communications
network's 12 topology, as will be appreciated by those of skill in
the art. In another embodiment, the source node 16 adds a header to
the data packets in order to define the virtual local area network
20.
[0030] In one embodiment, the source node 16 sends a probe
packet(s) to the network devices 14a-14n to gather information
about the traffic queues at each network device that receives the
probe packet(s). The routing table 18a-18n at each network
device 14a-14n that receives the probe packet may be based upon the
gathered information for each respective traffic queue.
[0031] In one embodiment, the network devices 14a-14n are members
of at least one virtual local area network 20. In another
embodiment, the probe packets include a layer 2 flag and/or
sequence/flow/source node IDs.
[0032] Each network device 14a-14n can ignore the probe packet if
it is busy. In one configuration, at least one of the network
devices 14a-14n provides its extended queue status to the source
node 16 in response to receiving the probe packet.
[0033] One of the network devices 14a-14n may provide its extended
queue status to other network devices in response to receiving the
probe packet. The extended queue status can include the number of
pings from any flow ID received since the last queue change, the
number of packets forwarded since the last queue change, and/or
pointers to a complete network device core dump.
[0034] In one embodiment, if an extended queue status exceeds a
threshold level, the source node 16 updates the routing table
18a-18n to rebalance traffic loads within the VLAN 20. The probe
packet can be sent in response to the source node 16 receiving a
threshold number of congestion notification messages in a given
time interval.
[0035] In one embodiment, the system 10 additionally includes a
destination node 22 that works together with the source node 16 to
determine the route the data packets follow through network 12.
There can be any number of destination nodes 22 in system 10.
[0036] The source node 16 may be configured to collect congestion
notification messages from the network devices 14a-14n, and map the
collected congestion notification messages to the network topology.
The system 10 may also include a filter 24 that controls which
portions of the congestion notification messages from the network
devices 14a-14n are used by the source node 16. Thus, the source
node 16 can route around any network device 14a-14n for which the
collected congestion notification messages reveal a history of
congestion.
[0037] In one embodiment, the source node 16 routes to, or around,
any network device 14a-14n based upon a link cost indicator 26. The
system 10 can further include a destination node 22 that selects
the order of the routes.
[0038] Another aspect of the invention is a method to investigate
congestion in a computer network 12, which is now described with
reference to flowchart 30 of FIG. 2. The method begins at Block 32
and may include sending a probe packet to network devices from a
source node to gather information about the traffic queues at each
network device that is examined by the probe packet at Block 34.
The method may also include basing a routing table at each network
device that receives the probe packet on the gathered information
for respective each traffic queue at Block 36. The method ends at
Block 38.
[0039] In another method embodiment, which is now described with
reference to flowchart 40 of FIG. 3, the method begins at Block 42.
The method may include the steps of FIG. 2 at Blocks 34 and 36. The
method may additionally include organizing the network devices into
a virtual local area network at Block 44. The method ends at Block
46.
[0040] In another method embodiment, which is now described with
reference to flowchart 48 of FIG. 4, the method begins at Block 50.
The method may include the steps of FIG. 2 at Blocks 34 and 36. The
method may additionally include structuring the probe packets to
include at least one of a layer 2 flag and sequence/flow/source
node IDs at Block 52. The method ends at Block 54.
[0041] In another method embodiment, which is now described with
reference to flowchart 56 of FIG. 5, the method begins at Block 58.
The method may include the steps of FIG. 2 at Blocks 34 and 36. The
method may additionally include sending an extended queue status of
at least one of the network devices to the source node in response
to receiving the probe packet at Block 60. The method ends at Block
62.
[0042] In another method embodiment, which is now described with
reference to flowchart 64 of FIG. 6, the method begins at Block 66.
The method may include the steps of FIG. 2 at Blocks 34 and 36. The
method may additionally include providing the extended queue status
of a network device to other network devices in response to the
network device receiving the probe packet at Block 68. The method
ends at Block 70.
[0043] In another method embodiment, which is now described with
reference to flowchart 72 of FIG. 7, the method begins at Block 74.
The method may include the steps of FIG. 5 at Blocks 34, 36, and
60. The method may additionally comprise including at least one of
the number of pings from any flow ID received since the last queue
change, the number of packets forwarded since the last queue
change, and/or pointers to a complete network device core dump as
part of the extended queue status at Block 76. The method ends at
Block 78.
[0044] In another method embodiment, which is now described with
reference to flowchart 80 of FIG. 8, the method begins at Block 82.
The method may include the steps of FIG. 5 at Blocks 34, 36, and
60. The method may additionally include updating the routing table
to rebalance traffic loads via the source node if the extended
queue status exceeds a threshold level at Block 84. The method ends
at Block 86.
[0045] In view of the foregoing, the system 10 addresses the
investigation of congestion in computer networks 12. For example,
large converged networks are prone to congestion and poor
performance because they cannot sense and react to potential
congestion conditions. System 10 provides a proactive scheme for
probing network congestion points, identifying potential congestion
areas in advance, and/or preventing them from forming by rerouting
traffic along different paths.
[0046] In other words, system 10 uses proactive source-based
routing, which incorporates an active feedback request command that
takes snapshots of the state of the network and uses this
information to prevent congestion or other traffic flow problems
before they occur. In this approach the source node 16, e.g.
traffic source, actively monitors the end-to-end traffic flows by
inserting a probing packet, called "feedback request", into the
data stream at periodic intervals. This probe packet will traverse
the network 12 (the VLANs plus any alternative paths) and collect
information on the traffic queue loads.
[0047] In one embodiment, system 10 does not require congestion
notification messages ("CNM") in order to work. In another
embodiment, in a network which uses CNMs, the probing can also be
triggered by a source having received more than a certain number of
CNMs in a given time interval.
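For illustration only, the CNM-triggered probing described above might be sketched as follows. The class and parameter names are hypothetical and not part of the application; this assumes a simple sliding time window over CNM arrival timestamps.

```python
from collections import deque

class CnmProbeTrigger:
    """Decide when a source should inject a probe ("feedback request"):
    fire once more than `max_cnms` congestion notification messages
    arrive within a sliding window of `interval` seconds."""

    def __init__(self, max_cnms, interval):
        self.max_cnms = max_cnms
        self.interval = interval
        self.arrivals = deque()  # timestamps of recent CNMs

    def on_cnm(self, now):
        """Record a CNM at time `now`; return True if a probe should be sent."""
        self.arrivals.append(now)
        # Drop CNMs that have fallen out of the time window.
        while self.arrivals and now - self.arrivals[0] > self.interval:
            self.arrivals.popleft()
        return len(self.arrivals) > self.max_cnms
```

In a CNM-free network the trigger is simply never consulted, matching the statement that the system does not require CNMs in order to work.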
[0048] A related problem in converged networks is the monitoring
and control of adaptive routing fabrics. Most industry standard
switches are compliant with IEEE 802.1Qau routing mechanisms.
However, they fail to offer a means for delivering adaptive
feedback information to the traffic sources before congestion
arises in the network.
[0049] System 10 addresses the foregoing and greatly enhances the
speed of congestion feedback on layer 2 networks, and provides the
new function of anticipating probable congestion points before they
occur.
[0050] In one embodiment, the source node 16 autonomously issues a
feedback request command. In another embodiment, the source node 16
begins to issue feedback request after receiving a set number of
congestion notification messages, e.g. as defined in Quantized
Congestion Notification (QCN). When feedback requests are returned,
system 10 can either count the number of responses per flow ID
(stateful approach) or allow the responses to remain anonymous
(stateless approach).
[0051] In one embodiment, the source node 16 injects a feedback
request packet into the network 12 with a layer 2 flag and
sequence/flow/RP IDs. The network device 14a-14n receives the
feedback request. If the network device 14a-14n is busy it may
disregard the request, if not, it increments a counter indicating
that a feedback request packet has been received. The network
device 14a-14n then dumps its extended queue status information and
returns this data back to the source node 16 that originated the
feedback request packet. In another embodiment, the network device
14a-14n may also be set to forward the feedback request to other
nodes in the network 12.
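The switch-side handling just described (disregard if busy, otherwise count the request and return the extended queue status) could be sketched as below. The field names in the status record are illustrative, chosen to mirror the items listed in the application.

```python
class NetworkDevice:
    """Sketch of a switch handling a feedback request packet: a busy
    device may silently drop the probe; otherwise it increments its
    counter and returns its extended queue status to the source."""

    def __init__(self, name):
        self.name = name
        self.busy = False
        self.requests_seen = 0           # feedback requests received
        self.pings_since_change = 0      # pings since last queue change
        self.forwarded_since_change = 0  # packets forwarded since last queue change

    def on_feedback_request(self):
        """Return the extended queue status, or None if the probe is ignored."""
        if self.busy:
            return None  # the device is free to disregard the request
        self.requests_seen += 1
        return {
            "device": self.name,
            "pings_since_change": self.pings_since_change,
            "forwarded_since_change": self.forwarded_since_change,
            "core_dump_ptr": None,  # placeholder for a pointer to a full core dump
        }
```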
[0052] In one embodiment, the extended queue status may include the
number of pings from any flow ID received since the last queue
change, the number of packets forwarded since the last queue
change, and/or the pointers to a complete CP core dump.
[0053] In one embodiment, the feedback requests may be triggered by
QCN frames, so that any rate-limiting traffic flows are probed. For
example, system 10 might send one feedback request frame for every
N kilobytes of data sent per flow (e.g. N=750 kB). This provides
the potential for early response to pending congestion points.
Using the information obtained from feedback requests, source
adaptive routing may be employed to stop network 12 congestion
before it happens. This information also makes it possible to
optimize traffic flows according to latency, throughput, or other
user requirements.
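The one-probe-per-N-kilobytes pacing mentioned above might look like the following sketch; the per-flow accounting is an assumption about how the byte counting would be kept.

```python
class FlowProbeScheduler:
    """Send one feedback request per N kilobytes of data sent per flow
    (the application gives N = 750 kB as an example)."""

    def __init__(self, n_kilobytes=750):
        self.threshold = n_kilobytes * 1000  # bytes between probes
        self.sent = {}  # flow ID -> bytes accounted since last probe

    def on_send(self, flow_id, nbytes):
        """Account `nbytes` to `flow_id`; return True when a probe is due."""
        total = self.sent.get(flow_id, 0) + nbytes
        if total >= self.threshold:
            self.sent[flow_id] = total - self.threshold  # carry the remainder
            return True
        self.sent[flow_id] = total
        return False
```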
[0054] In one embodiment, system 10 uses QCN messaging on a
converged network. The detailed queue information is already
available in the network device 14a-14n, e.g. switch CP, but it
needs to be formatted and collected by the source node 16.
[0055] The performance overhead has been demonstrated to be less
than 1% for feedback request monitoring in software. The overhead
limits can be further reduced if desired by allowing the source
node 16 and network devices 14a-14n to adjust the frequency of
feedback control requests. This approach further enhances the
value of enabled switch fabrics and allows for more effective use
of source based adaptive routing.
[0056] In one embodiment, in a CEE/FCoE network 12 having a
plurality of VLANs 20 each having a plurality of network devices
14a-14n, e.g. switches, which enable paths over which traffic can
be routed through the network, a method for locating potential
congestion points in the network is described.
[0057] As noted above, large converged networks do not define
adequate means to control network congestion, leading to traffic
delays, dropped data frames, and poor performance. The conventional
hop-by-hop routing is not efficient at dealing with network
congestion, especially when a combination of storage and networking
traffic is placed over a common network, resulting in new and
poorly characterized traffic statistics. If the benefits of
converged networking are to be realized, a new method of traffic
routing is required. To address such, system 10 uses a source
based, reactive, and adaptive routing scheme.
[0058] In one embodiment, system 10 adds a virtual LAN (VLAN) 20
routing table 18a-18n in every network device 14a-14n, e.g.
switches. The VLAN 20 is defined by a 12 bit header field appended
to all packets (hence this is a source-based routing scheme), plus
a set of routing table 18a-18n entries (in all the switches) that
can route the VLANs.
[0059] The 12 bit VLAN 20 ID is in addition to the usual packet
header fields, and it triggers the new VLAN 20 routing scheme in
each network device 14a-14n. Each network device 14a-14n has its
own routing entry for every active VLAN 20.
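A minimal sketch of the per-switch VLAN routing entry, assuming a direct mapping from the 12-bit VLAN ID to an output port (the class and method names are hypothetical):

```python
class VlanSwitch:
    """Sketch of the per-switch VLAN routing table: the 12-bit VLAN ID
    carried in the packet header selects a pre-loaded routing entry."""

    def __init__(self):
        self.vlan_table = {}  # 12-bit VLAN ID -> output port

    def load_entry(self, vlan_id, out_port):
        if not 0 <= vlan_id < 4096:  # 12 bits allow up to 4096 VLAN IDs
            raise ValueError("VLAN ID must fit in 12 bits")
        self.vlan_table[vlan_id] = out_port

    def forward(self, packet):
        """Pick the output port from the VLAN ID in the packet header."""
        return self.vlan_table[packet["vlan_id"]]
```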
[0060] In one embodiment, source node 16 and destination node 22
use a global selection function to decide the optimal end-to-end
path for the traffic flows. The optimal end-to-end path is then
pre-loaded into the network devices 14a-14n, e.g. switches, which
are members of this VLAN 20.
[0061] In one embodiment, the VLAN 20 table 18a-18n is adaptive and
will be periodically updated. The refresh time of the routing table
18a-18n can be varied, but will probably be at least a few seconds
for a reasonably large number (4,000 or so) of VLANs 20. The data
traffic subject to optimization will use the VLANs 20 as configured
by the controlling sources/applications 16.
[0062] In one embodiment, congestion notification messages (CNMs)
from the network devices 14a-14n, e.g. fabric switches, are
collected by the traffic source 16, marking the switch and port
locations based on the port ID. Every traffic source 16 builds a
history of CNMs that it has received, which is mapped to the
network topology. Based on the source's 16 historical mapping of
global end-to-end paths, the source will reconfigure any overloaded
paths, defined by the VLAN 20 tables 18a-18n, to route around the
most persistent congestion points (signaled by the enabled
switches).
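The per-source CNM history keyed by switch and port location might be kept as in the sketch below; the selection of "most persistent" congestion points by count is an assumption about how the history would be queried.

```python
from collections import Counter

class CnmHistory:
    """Sketch of a traffic source's CNM history: each CNM is attributed
    to a (switch, port) location, so that the most persistent congestion
    points can be routed around."""

    def __init__(self):
        self.counts = Counter()  # (switch ID, port ID) -> CNMs seen

    def record(self, switch_id, port_id):
        self.counts[(switch_id, port_id)] += 1

    def most_persistent(self, n=1):
        """Return the n locations with the longest congestion history."""
        return [loc for loc, _ in self.counts.most_common(n)]
```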
[0063] In one embodiment, for each destination, the source 16 knows
all the possible paths a packet can take. The source 16 can then
evaluate the congestion level along each of these paths and choose
the one with the smallest cost, and therefore the method is
adaptive.
[0064] In another embodiment, the order in which the paths are
selected is given by the destination 22. In the case that no CNMs
are received, the source 16 will default to the same path used by
conventional and oblivious methods.
[0065] In one embodiment, if the default path is congested, the
alternative paths are checked next (by comparing their congestion
cost), starting with the default one, in a circular search, until a
non-congested path is found. Otherwise, the first path with the
minimum congestion cost is chosen.
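The circular search just described can be sketched as a single function; the congestion threshold that separates "congested" from "non-congested" paths is an assumed parameter.

```python
def select_path(paths, costs, default_index, congestion_threshold):
    """Circular search over candidate paths, starting at the default:
    return the first non-congested path; if every path is congested,
    fall back to the first path with the minimum congestion cost."""
    n = len(paths)
    for step in range(n):
        i = (default_index + step) % n
        if costs[i] < congestion_threshold:
            return paths[i]
    # All paths congested: pick the cheapest one (first on a tie).
    best = min(range(n), key=lambda i: costs[i])
    return paths[best]
```

Note that when the default path itself is below the threshold, the search returns it immediately, matching the oblivious-routing fallback described above.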
[0066] In another embodiment, the CNMs are used as link cost
indicators 26. System 10 defines both a global and local method of
cost weighting, plus a filtering scheme to enhance performance.
[0067] In this manner, system 10 can determine where the most
congested links are located in the network 12. For each destination
22, the source 16 knows all the possible paths a packet can take.
The source 16 can then evaluate the congestion level along each of
these paths and choose the one with the smallest cost and therefore
the method is adaptive.
[0068] In one embodiment, the system 10 uses at least one of two
different methods of computing the path cost. The first is a global
price, which is the (weighted) sum of the congestion levels on
each link of the path. The other is the local price, which is the
maximum (weighted) congestion level of a link of the path.
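The two path-cost methods reduce to a weighted sum and a weighted maximum over the per-link congestion levels, as in this sketch (uniform weights are assumed when none are given):

```python
def global_price(link_congestion, weights=None):
    """Global price: (weighted) sum of the congestion levels on each link."""
    weights = weights or [1.0] * len(link_congestion)
    return sum(w * c for w, c in zip(weights, link_congestion))

def local_price(link_congestion, weights=None):
    """Local price: maximum (weighted) congestion level of any single link."""
    weights = weights or [1.0] * len(link_congestion)
    return max(w * c for w, c in zip(weights, link_congestion))
```

A path with one hot link scores worse than a mildly loaded path under the local price, but can score better under the global price, which is exactly the trade-off discussed in the following paragraphs.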
[0069] The intuition behind the local price method is that a path
where a single link experiences heavy congestion is worse than a
path where multiple links experience mild congestion. On the other
hand, a path with two heavily congested links is worse than a path
with a single heavily congested link.
[0070] The intuition behind using a global price method is that the
CNMs received from distant network devices 14a-14n, e.g. switches,
are more informative than those received from switches that are
close to the source 16. This happens because the congestion appears
on the links that are likely to concentrate more flows (i.e. the
links that are farther away from the source).
[0071] In one embodiment, to avoid high frequency noise which could
lead to instabilities in the network devices 14a-14n, e.g. switch,
updating process, the system 10 applies filter 24 to the incoming
stream of CNMs. The filter 24 is a low pass filter, for example.
The filter 24 would have a running time window to average and
smooth the CNM stream.
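The running-time-window smoothing of the CNM stream might be sketched as a simple windowed average; treating each CNM as carrying a numeric congestion level is an assumption for illustration.

```python
from collections import deque

class CnmFilter:
    """Low-pass filter over the incoming CNM stream: a running time
    window averages the congestion feedback to smooth out high
    frequency noise before the routing tables are updated."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.samples = deque()  # (timestamp, congestion level)

    def update(self, now, level):
        """Add a sample and return the smoothed congestion level."""
        self.samples.append((now, level))
        # Age out samples older than the running window.
        while self.samples and now - self.samples[0][0] > self.window:
            self.samples.popleft()
        return sum(lvl for _, lvl in self.samples) / len(self.samples)
```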
[0072] In one embodiment, periodically the source 16 will refresh,
and if necessary, update the VLAN 20 path information in the
affected network devices 14a-14n. In another embodiment, the
optimal path routing is calculated by the end points, e.g. source
16 and destination 22, and refreshed periodically throughout the
switch fabric.
[0073] The system 10 can be implemented in hardware, software,
and/or firmware. Another aspect of the invention is computer
readable program codes coupled to tangible media to investigate
congestion in a computer network 12. The computer readable program
codes may be configured to cause the program to send a probe packet
to network devices 14a-14n from a source node 16 to gather
information about the traffic queues at each network device that is
examined by the probe packet. The computer readable program codes
may also base a routing table 18a-18n at each network device
14a-14n that receives the probe packet on the gathered information
for each respective traffic queue.
[0074] As will be appreciated by one skilled in the art, aspects of
the invention may be embodied as a system, method or computer
program product. Accordingly, aspects of the invention may take the
form of an entirely hardware embodiment, an entirely software
embodiment (including firmware, resident software, micro-code,
etc.) or an embodiment combining software and hardware aspects that
may all generally be referred to herein as a "circuit," "module" or
"system." Furthermore, aspects of the invention may take the form
of a computer program product embodied in one or more computer
readable medium(s) having computer readable program code embodied
thereon.
[0075] Any combination of one or more computer readable medium(s)
may be utilized. The computer readable medium may be a computer
readable signal medium or a computer readable storage medium. A
computer readable storage medium may be, for example, but not
limited to, an electronic, magnetic, optical, electromagnetic,
infrared, or semiconductor system, apparatus, or device, or any
suitable combination of the foregoing. More specific examples (a
non-exhaustive list) of the computer readable storage medium would
include the following: an electrical connection having one or more
wires, a portable computer diskette, a hard disk, a random access
memory (RAM), a read-only memory (ROM), an erasable programmable
read-only memory (EPROM or Flash memory), an optical fiber, a
portable compact disc read-only memory (CD-ROM), an optical storage
device, a magnetic storage device, or any suitable combination of
the foregoing. In the context of this document, a computer readable
storage medium may be any tangible medium that can contain, or
store a program for use by or in connection with an instruction
execution system, apparatus, or device.
[0076] A computer readable signal medium may include a propagated
data signal with computer readable program code embodied therein,
for example, in baseband or as part of a carrier wave. Such a
propagated signal may take any of a variety of forms, including,
but not limited to, electro-magnetic, optical, or any suitable
combination thereof. A computer readable signal medium may be any
computer readable medium that is not a computer readable storage
medium and that can communicate, propagate, or transport a program
for use by or in connection with an instruction execution system,
apparatus, or device.
[0077] Program code embodied on a computer readable medium may be
transmitted using any appropriate medium, including but not limited
to wireless, wireline, optical fiber cable, RF, etc., or any
suitable combination of the foregoing.
[0078] Computer program code for carrying out operations for
aspects of the present invention may be written in any combination
of one or more programming languages, including an object oriented
programming language such as Java, Smalltalk, C++ or the like and
conventional procedural programming languages, such as the "C"
programming language or similar programming languages. The program
code may execute entirely on the user's computer, partly on the
user's computer, as a stand-alone software package, partly on the
user's computer and partly on a remote computer or entirely on the
remote computer or server. In the latter scenario, the remote
computer may be connected to the user's computer through any type
of network, including a local area network (LAN) or a wide area
network (WAN), or the connection may be made to an external
computer (for example, through the Internet using an Internet
Service Provider).
[0079] Aspects of the invention are described above with reference
to flowchart illustrations and/or block diagrams of methods,
apparatus (systems) and computer program products according to
embodiments of the invention. It will be understood that each block
of the flowchart illustrations and/or block diagrams, and
combinations of blocks in the flowchart illustrations and/or block
diagrams, can be implemented by computer program instructions.
These computer program instructions may be provided to a processor
of a general purpose computer, special purpose computer, or other
programmable data processing apparatus to produce a machine, such
that the instructions, which execute via the processor of the
computer or other programmable data processing apparatus, create
means for implementing the functions/acts specified in the
flowchart and/or block diagram block or blocks.
[0080] These computer program instructions may also be stored in a
computer readable medium that can direct a computer, other
programmable data processing apparatus, or other devices to
function in a particular manner, such that the instructions stored
in the computer readable medium produce an article of manufacture
including instructions which implement the function/act specified
in the flowchart and/or block diagram block or blocks.
[0081] The computer program instructions may also be loaded onto a
computer, other programmable data processing apparatus, or other
devices to cause a series of operational steps to be performed on
the computer, other programmable apparatus or other devices to
produce a computer implemented process such that the instructions
which execute on the computer or other programmable apparatus
provide processes for implementing the functions/acts specified in
the flowchart and/or block diagram block or blocks.
[0082] The flowchart and block diagrams in the Figures illustrate
the architecture, functionality, and operation of possible
implementations of systems, methods and computer program products
according to various embodiments of the present invention. In this
regard, each block in the flowchart or block diagrams may represent
a module, segment, or portion of code, which comprises one or more
executable instructions for implementing the specified logical
function(s). It should also be noted that, in some alternative
implementations, the functions noted in the block may occur out of
the order noted in the figures. For example, two blocks shown in
succession may, in fact, be executed substantially concurrently, or
the blocks may sometimes be executed in the reverse order,
depending upon the functionality involved. It will also be noted
that each block of the block diagrams and/or flowchart
illustration, and combinations of blocks in the block diagrams
and/or flowchart illustration, can be implemented by special
purpose hardware-based systems that perform the specified functions
or acts, or combinations of special purpose hardware and computer
instructions.
[0083] The terminology used herein is for the purpose of describing
particular embodiments only and is not intended to be limiting of
the invention. As used herein, the singular forms "a", "an" and
"the" are intended to include the plural forms as well, unless the
context clearly indicates otherwise. It will be further understood
that the terms "comprises" and/or "comprising," when used in this
specification, specify the presence of stated features, integers,
steps, operations, elements, and/or components, but do not preclude
the presence or addition of one or more other features, integers,
steps, operations, elements, components, and/or groups thereof.
[0084] While the preferred embodiment to the invention has been
described, it will be understood that those skilled in the art,
both now and in the future, may make various improvements and
enhancements which fall within the scope of the claims which
follow. These claims should be construed to maintain the proper
protection for the invention first described.
* * * * *