U.S. patent application number 15/591013, for a hybrid buffer management scheme for immutable pages, was published by the patent office on 2017-08-24.
The applicant listed for this patent is Samsung Electronics Co., Ltd. The invention is credited to Yang Seok Ki and Sang Won Lee.
United States Patent Application 20170242858, Kind Code A1
Application Number: 15/591013
Document ID: /
Family ID: 53044740
Publication Date: August 24, 2017
Inventors: Lee; Sang Won; et al.
HYBRID BUFFER MANAGEMENT SCHEME FOR IMMUTABLE PAGES
Abstract
Exemplary embodiments provide a database query processing system
comprising a hybrid buffer pool stored in a computer main memory
coupled to a processor, the hybrid buffer pool comprising: a shared
buffer pool of page frames that contain dirty data pages that are
modified and will be written back to storage; and an immutable
buffer pool that temporarily contains read-only data pages from the
storage, wherein the shared buffer pool is different from the
immutable buffer pool. A page multiplexer identifies which ones of
the data pages from storage to store in the immutable buffer pool
based at least in part on information from a query processor.
Inventors: Lee; Sang Won (Menlo Park, CA); Ki; Yang Seok (Palo Alto, CA)
Applicant: Samsung Electronics Co., Ltd., Gyeonggi-do, KR
Family ID: 53044740
Appl. No.: 15/591013
Filed: May 9, 2017
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
14281453 | May 19, 2014 |
15591013 | |
61902026 | Nov 8, 2013 |
Current U.S. Class: 1/1
Current CPC Class: G06F 12/0804 20130101; G06F 2212/657 20130101; G06F 13/1673 20130101; G06F 16/2237 20190101; G06F 16/24552 20190101; H05K 5/03 20130101; H05K 5/069 20130101; G06F 12/1009 20130101; H05K 7/20536 20130101; H05K 5/04 20130101
International Class: G06F 17/30 20060101 G06F017/30; G06F 12/1009 20060101 G06F012/1009; G06F 13/16 20060101 G06F013/16
Claims
1. A database query processing system, comprising: a hybrid buffer
pool stored in a computer main memory coupled to a processor, the
hybrid buffer pool comprising: a shared buffer pool of page frames
that contain dirty data pages that are modified and will be written
back to storage; and an immutable buffer pool that temporarily
contains read-only data pages from the storage, wherein the shared
buffer pool is different from the immutable buffer pool; and a page
multiplexer that identifies which ones of the data pages from
storage to store in the immutable buffer pool based at least in
part on information from a query processor.
2. The database query processing system of claim 1, wherein the
immutable buffer pool comprises a circular array of fixed-sized
circular page buffers, each comprising a plurality of page
frames.
3. The database query processing system of claim 2, wherein each of
the circular page buffers is identified using an array index, where
indices of the array index enumerate the circular page buffers.
4. The database query processing system of claim 3, wherein the
array index includes an indication of whether each of the circular
page buffers is used or free.
5. The database query processing system of claim 2, further
comprising an immutable buffer manager that assigns at least one of
the circular page buffers to a requesting process.
6. The database query processing system of claim 5, wherein
assignment of the at least one circular page buffer comprises:
responsive to determining that this is the first circular page
buffer assigned to the process, assigning a free circular page
buffer to the process; and responsive to determining that no free
circular page buffer is available, assigning a page frame from the
shared buffer pool to the process; and responsive to determining
this is not the first circular page buffer assigned to the process,
determining if the process is in single buffer mode, and if so,
assigning a page frame from the shared buffer pool to the
process.
7. The database query processing system of claim 5, further
comprising allocation of at least one page frame from a circular
page buffer assigned to the process by: responsive to finding a
free page frame from the circular page buffer, assigning the free
page frame to the process; responsive to not finding a free page
frame, determining if the process is in single buffer mode, and if
so, assigning a page frame from the shared buffer pool to the
process; and responsive to determining that the process is not in
single buffer mode, assigning another circular page buffer to the
process.
8. The database query processing system of claim 1, wherein
conditions that must be satisfied in order for the page multiplexer
to identify the data pages for storage in the immutable buffer pool
comprise: any data page in the long tail that has no spatial or
temporal locality; any data page resulting from read queries with
weak consistency; and any data page that is accessed through a
table index for a point or range query.
9. The database query processing system of claim 1, wherein the
immutable buffer pool is a one-time read-only memory space that is
recycled at any time after a query lifespan.
10. The database query processing system of claim 1, wherein the
shared buffer pool stores long-lived shared data pages, and the
immutable buffer pool stores short-lived read-only data pages that
avoids delay due to eviction of dirty data pages from the shared
buffer pool.
11. The database query processing system of claim 1, wherein the
page multiplexer receives information from the query processor in
the form of query hints.
12. A database server having access to a storage of data pages,
comprising: a processor; a main memory coupled to the processor, the main memory
including a hybrid buffer pool, comprising: a shared buffer pool of
page frames that contains dirty data pages that are modified and
will be written back to storage; and an immutable buffer pool that
temporarily contains read-only data pages from the storage, wherein
the shared buffer pool is different from the immutable buffer pool;
and a page multiplexer executed by the processor that identifies
which ones of the data pages from storage to store in the immutable
buffer pool based at least in part on information from a query
processor.
13. The database server of claim 12, wherein the immutable buffer
pool comprises a circular array of fixed-sized circular page
buffers, each comprising a plurality of page frames.
14. The database server of claim 13, wherein each of the circular
page buffers is identified using an array index, where indices of
the array index enumerate the circular page buffers.
15. The database server of claim 14, wherein the array index
includes an indication of whether each of the circular page buffers
is used or free.
16. The database server of claim 13, further comprising an
immutable buffer manager executed by the processor that assigns at
least one of the circular page buffers to a requesting process.
17. The database server of claim 16, wherein assignment of the at
least one circular page buffer comprises: responsive to determining
that this is the first circular page buffer assigned to the
process, assigning a free circular page buffer to the process; and
responsive to determining that no free circular page buffer is
available, assigning a page frame from the shared buffer pool to
the process; and responsive to determining this is not the first
circular page buffer assigned to the process, determining if the
process is in single buffer mode, and if so, assigning a page frame
from the shared buffer pool to the process.
18. The database server of claim 16, further comprising allocation
of at least one page frame from a circular page buffer assigned to
the process by: responsive to finding a free page frame from the
circular page buffer, assigning the free page frame to the process;
responsive to not finding a free page frame, determining if the
process is in single buffer mode, and if so, assigning a page frame
from the shared buffer pool to the process; and responsive to
determining that the process is not in single buffer mode,
assigning another circular page buffer to the process.
19. The database server of claim 12, wherein conditions that must
be satisfied in order for the page multiplexer to identify the data
pages for storage in the immutable buffer pool comprise: any data
page in the long tail that has no spatial or temporal locality; any
data page resulting from read queries with weak consistency; and
any data page that is accessed through a table index for a point or
range query.
20. The database server of claim 12, wherein the immutable buffer
pool is a one-time read-only memory space that is recycled at any
time after a query lifespan.
21. The database server of claim 12, wherein the shared buffer pool
stores long-lived shared data pages, and the immutable buffer pool
stores short-lived read-only data pages that avoids delay due to
eviction of dirty data pages from the shared buffer pool.
22. The database server of claim 12, wherein the page multiplexer
receives information from the query processor in the form of query
hints.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation of U.S. patent
application Ser. No. 14/281,453, filed May 19, 2014, which claims
the benefit of U.S. Provisional Application No. 61/902,026, filed
Nov. 8, 2013, both of which are incorporated herein by reference.
BACKGROUND
[0002] Most database systems preallocate and manage memory space
for buffering and caching data pages from storage. Modern database
buffers and file system caches include a pool of page frames which
can consume several gigabytes of memory space. This memory space is
used to stage data from storage for query processing and to provide
fast access to data. For best performance, these page frames are
typically managed by a buffer replacement algorithm, such as the
least recently used (LRU) scheme, in order to keep the data most
likely to be accessed in the future in memory. In real
environments, numerous processes contend for these page frames, so
the buffer replacement algorithm must be carefully designed.
However, the exemplary embodiments observe that database systems
often cannot saturate the storage device (e.g., an SSD) regardless
of client scale. An in-depth performance analysis of database
systems reveals that page reads are often delayed by preceding
page writes when there is high concurrency between reads and
writes. This "read blocked by write" problem can negatively impact
CPU utilization.
BRIEF SUMMARY
[0003] Exemplary embodiments provide a database query processing
system comprising a hybrid buffer pool stored in a computer main
memory coupled to a processor, the hybrid buffer pool comprising: a
shared buffer pool of page frames that contain dirty data pages
that are modified and will be written back to storage; and an
immutable buffer pool that temporarily contains read-only data
pages from the storage, wherein the shared buffer pool is different
from the immutable buffer pool. A page multiplexer identifies
which ones of the data pages from storage to store in the immutable
buffer pool based at least in part on information from a query
processor.
BRIEF DESCRIPTION OF SEVERAL VIEWS OF THE DRAWINGS
[0004] These and/or other features and utilities of the present
general inventive concept will become apparent and more readily
appreciated from the following description of the embodiments,
taken in conjunction with the accompanying drawings of which:
[0005] FIG. 1 is a block diagram illustrating a database query
processing system having a hybrid buffer pool having a shared
buffer pool and an immutable buffer pool in accordance with the
exemplary embodiments.
[0006] FIG. 2 is a block diagram illustrating components comprising
an immutable buffer pool.
[0007] FIG. 3 is a table illustrating conditions that must be
satisfied in order for the page multiplexer to identify data pages
that can be stored in the immutable buffer pool.
[0008] FIG. 4A is a flow diagram illustrating assignment of a
circular page buffer to a process by the immutable buffer
manager.
[0009] FIG. 4B is a flowchart illustrating allocation of an
immutable buffer page frame to a process by the immutable buffer
manager.
DETAILED DESCRIPTION
[0010] Reference will now be made in detail to the embodiments of
the present general inventive concept, examples of which are
illustrated in the accompanying drawings, wherein like reference
numerals refer to the like elements throughout. The embodiments are
described below in order to explain the present general inventive
concept while referring to the figures.
[0011] Advantages and features of the present invention and methods
of accomplishing the same may be understood more readily by
reference to the following detailed description of embodiments and
the accompanying drawings. The present general inventive concept
may, however, be embodied in many different forms and should not be
construed as being limited to the embodiments set forth herein.
Rather, these embodiments are provided so that this disclosure will
be thorough and complete and will fully convey the concept of the
general inventive concept to those skilled in the art, and the
present general inventive concept will only be defined by the
appended claims. In the drawings, the thickness of layers and
regions are exaggerated for clarity.
[0012] The use of the terms "a" and "an" and "the" and similar
referents in the context of describing the invention (especially in
the context of the following claims) are to be construed to cover
both the singular and the plural, unless otherwise indicated herein
or clearly contradicted by context. The terms "comprising,"
"having," "including," and "containing" are to be construed as
open-ended terms (i.e., meaning "including, but not limited to,")
unless otherwise noted.
[0013] The term "component" or "module", as used herein, means, but
is not limited to, a software or hardware component, such as a
field programmable gate array (FPGA) or an application specific
integrated circuit (ASIC), which performs certain tasks. A
component or module may advantageously be configured to reside in
the addressable storage medium and configured to execute on one or
more processors. Thus, a component or module may include, by way of
example, components, such as software components, object-oriented
software components, class components and task components,
processes, functions, attributes, procedures, subroutines, segments
of program code, drivers, firmware, microcode, circuitry, data,
databases, data structures, tables, arrays, and variables. The
functionality provided for in the components and modules may be
combined into fewer components and modules or further separated
into additional components and modules.
[0014] Unless defined otherwise, all technical and scientific terms
used herein have the same meaning as commonly understood by one of
ordinary skill in the art to which this invention belongs. It is
noted that the use of any and all examples, or exemplary terms
provided herein is intended merely to better illuminate the
invention and is not a limitation on the scope of the invention
unless otherwise specified. Further, unless defined otherwise, all
terms defined in generally used dictionaries may not be overly
interpreted.
[0015] This invention proposes a hybrid buffer management scheme to
solve the "read blocked by write" problem, so that database systems
can not only improve CPU utilization in terms of cache hit ratio
and effective cycles but also fully leverage the performance
potential of the storage.
[0016] FIG. 1 is a block diagram illustrating a database query
processing system 10 having a hybrid buffer pool having a shared
buffer pool and an immutable buffer pool in accordance with the
exemplary embodiments. The database query processing system 10 may
include the client computer 12 in communication with the database
server 14 over a network (not shown). The database server 14 may
include typical computer components including a processor 16, a
memory 18, and high-speed storage 20 (e.g., solid state drive
(SSD)).
[0017] In operation, a user or an application on the client
computer 12 may issue queries, e.g., a SQL command, to the database
server 14. A query processor 22 executing on the processor 16
receives, parses and executes the query. Database systems in
general, including the database server 14, stage data pages 24 from
storage 20 into a shared buffer pool 26 in memory 18 in order to
serve the query. Therefore, the database server 14 allocates memory
space, referred to as page frames 28, to read in a number of data
pages 24 from storage 20. This shared buffer pool 26 is a common
space for data processing. In conventional database systems, the
shared buffer pool would contain both clean data pages from storage
20 that are read-only, and dirty data pages 30 that are modified
after a read that need to be written back to storage 20.
[0018] For best performance, the data pages in the shared buffer
pool 26 are typically managed by a buffer replacement algorithm,
such as least recently used (LRU) scheme in order to keep data most
likely to be accessed in the future in the memory. In the meantime,
the database server 14 writes back the dirty data pages 30 to
storage 20 in the background to free up the buffer space for future
data processing.
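The LRU-managed shared buffer pool described above can be sketched as follows. This is an illustrative model only, not the patented implementation; the `SharedBufferPool` class and its methods are hypothetical names, and note how a read may stall on the write-back of a dirty victim page, which is exactly the "read blocked by write" problem.

```python
from collections import OrderedDict

class SharedBufferPool:
    """Minimal sketch of an LRU-managed buffer pool (illustrative only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.frames = OrderedDict()  # page_id -> (data, dirty flag), LRU order

    def get(self, page_id, read_from_storage):
        if page_id in self.frames:
            self.frames.move_to_end(page_id)  # mark as most recently used
            return self.frames[page_id][0]
        if len(self.frames) >= self.capacity:
            # Evict the least recently used frame to make room.
            victim_id, (data, dirty) = self.frames.popitem(last=False)
            if dirty:
                self.write_back(victim_id, data)  # the read stalls here
        data = read_from_storage(page_id)
        self.frames[page_id] = (data, False)
        return data

    def write_back(self, page_id, data):
        pass  # flush the dirty page to storage (stubbed out here)
```
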
[0019] Regardless of buffer replacement algorithms, conventional
database servers sometimes face the situation where dirty data
pages 30 need to be written back to storage 20 in order to free
space in the shared buffer pool 26 when the system workload is high
and the shared buffer pool 26 is full. As a result, a read
operation may be blocked in some systems until a write operation
completes, even though the write and the read are independent of
each other. The exemplary embodiments refer to this issue as a "read
blocked by write" problem. Note that this blocking of a read is
different from the situation where a data read is blocked until
preceding writes to the same data complete in order to preserve the
"read after write" consistency semantics.
[0020] For example, when MySQL cannot find a page to process in a
shared buffer pool, MySQL tries to secure a free page frame 28
before issuing the read request to storage. In general, a free page
frame 28 can be found in the shared buffer pool if free page frames
are collected in advance. However, when write backlogs are very
high, no free page frame 28 can be found in the shared buffer pool.
In this case, the database server traverses the LRU list of page
frames 28, starting from the oldest page frame, to search for a
victim page to write back to storage. This causes performance
issues in both the host and storage, because all buffer accesses
are serialized on the traversal of the LRU list to evict dirty data
pages 30, and because writing back to storage takes a long time.
[0021] This symptom is common for conventional commercial database
systems including Oracle as well as MySQL InnoDB. This
serialization limits not only the parallelism in database servers
but also the IO requests to storage. However, these performance
issues are not noticeable if the storage comprises a typical hard
disk drive (HDD) because I/O may already be saturated. In contrast,
database servers with high performance storage such as SSD suffer
from low utilization of host computing resources and storage
bandwidth.
[0022] According to the exemplary embodiments, a hybrid buffer
management scheme is proposed that addresses the serialization
issue of the LRU list traversal for the shared buffer pool 26 and
the "read blocked by write" problem. The hybrid buffer management scheme may
include an immutable buffer pool 34 in memory 18, a page
multiplexer 36, and an immutable buffer manager 38 executing on a
processor 16. Although not shown, the page multiplexer 36 may also
execute on the processor 16.
[0023] Thus, a database hybrid buffer pool is provided comprising
both the shared buffer pool 26 containing dirty data pages 30 that
have been modified and will be written back to storage 20, as well
as the immutable buffer pool 34 that temporarily contains read-only
data pages 32 from storage 20, i.e., data pages accessed by a
read-only query. For instance, data pages accessed by a select
query without an update are read-only. The shared buffer pool 26
may store conventional long-lived shared data pages, while the
immutable buffer pool may store short-lived read-only data pages
32, where a short-lived data page is valid only during the lifespan
of a query.
[0024] The immutable buffer 34, which has minimal maintenance
overhead, avoids unnecessary delay due to the eviction of dirty
data pages 30 from the shared buffer pool 26. According to the
exemplary embodiment, the immutable buffer 34 is a temporary
buffer, assigned to a process, which is local, read-only,
short-lived, lock-free, and recyclable. This special buffer can be
used for data pages that satisfy certain conditions and can improve
the overall system and storage utilization by avoiding unnecessary
delay due to the eviction of dirty data pages 30, while prior
long-lived shared buffers (or caches) experience contention for
free pages.
[0025] The page multiplexer 36 identifies which ones of the data
pages 24 from storage 20 to store in the immutable buffer pool 34
based at least in part on information from the query processor 22.
In one embodiment, the page multiplexer 36 receives information
from the query processor 22 in the form of query hints 39. As
described in FIG. 3, the query processor analyzes a given query and
provides hints indicating whether it is a read query, whether it
can use an index, and/or whether it accesses pages infrequently.
[0026] The hybrid buffer pool is provided for queries against
read-only (immutable) pages 32 to avoid the "read blocked by write"
problem and to avoid or minimize the issue of the serialization.
The hybrid buffer pool has several advantages since it does not
allow rarely accessed pages to enter the shared buffer pool 26. One
advantage is that the database server 14 can issue more read
requests to storage 20 since the read operations are not blocked by
the write operations. This increases the utilization of the storage
20 and the database server 14, and consequently improves the
throughput of the system and the response time of each query.
Another advantage is that this hybrid buffer pool is also
beneficial to other processes that need to read data pages into a
buffer, because programs contend less often for free pages. Yet a
further advantage is that the hybrid buffer scheme can avoid the
time-consuming buffer management overhead for cold pages, so that
significant CPU time may be saved.
[0027] FIG. 2 is a block diagram illustrating components comprising
an immutable buffer pool. According to one embodiment, the
immutable buffer pool 34' can be implemented as a circular array
200 of fixed-sized circular page buffers 202, each comprising a
plurality of page frames 208. In one embodiment, each of the
circular page buffers 202 may be identified using an array index
204, where indices of the array index 204 may enumerate the
circular page buffers 202. In one embodiment, the array index 204
(or another data structure) may include an indication of whether
each of the circular page buffers 202 is used or free.
[0028] In one embodiment, the size of the circular array 200 should
be sufficiently large to support the number of concurrent processes
206 that the system supports. The size of the circular page buffers
202 may be determined by factors including at least I/O bandwidth
and the number of concurrent processes.
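The structure of FIG. 2 might be sketched as below; the class names, sizes, and the boolean used/free flag are illustrative assumptions, since the patent only requires a circular array of fixed-sized circular page buffers with an array index that indicates which buffers are used or free.

```python
class CircularPageBuffer:
    """A fixed-size ring of page frames (element 202 in FIG. 2)."""

    def __init__(self, num_frames):
        self.frames = [None] * num_frames  # None marks a free page frame
        self.in_use = False                # used/free indication per buffer

class ImmutableBufferPool:
    """Circular array (element 200) of circular page buffers."""

    def __init__(self, num_buffers, frames_per_buffer):
        self.buffers = [CircularPageBuffer(frames_per_buffer)
                        for _ in range(num_buffers)]

    def find_free_index(self):
        # The array index enumerates the buffers and records used/free.
        for i, buf in enumerate(self.buffers):
            if not buf.in_use:
                return i
        return None  # no free circular page buffer available
```
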
[0029] The immutable buffer pool 34 can be used to store or stage
data pages 24 that satisfy predetermined conditions. Since the
immutable buffer pool 34 is a one-time read-only memory space, the
immutable buffer pool 34 can be recycled at any time after a query
lifespan.
[0030] FIG. 3 is a table illustrating conditions that must be
satisfied in order for the page multiplexer 36 to identify data
pages 24 that can be stored in the immutable buffer pool 34. In one
embodiment, the page multiplexer 36 may identify data pages to be
stored in the immutable buffer pool based at least in part on query
hints 39 from the query processor. The page multiplexer 36
identifies and stores in the immutable buffer pool 34 any data
pages 24 that satisfy the following conditions 300:
[0031] 1) Any data page 24 in the long tail that has no spatial or
temporal locality, i.e., that is infrequently accessed--pages that
satisfy this condition are typically accessed just once and not
accessed again for a long time, so caching them wastes cache
space;
[0032] 2) Any data page 24 resulting from read queries with weak
consistency. An example includes, but is not limited to, the READ
COMMITTED mode of the ANSI isolation levels, where the query
results can differ over time because the query is applied to a
snapshot of the table at a certain time.
[0033] 3) Any data page that is accessed through a table index for
a point or range query. Examples include, but are not limited to,
leaf nodes of an Index Only Table of a storage engine (e.g., InnoDB
for MySQL), and randomly accessed data pages of heap file of an
Oracle database.
[0034] In one embodiment, the page multiplexer system does not
cache internal index nodes of the index tree in the immutable
buffer pool 34, but rather in the shared buffer pool 26, because
internal index nodes have high temporal locality.
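The routing conditions of FIG. 3 can be summarized as a small decision function. The hint field names below are assumptions for illustration only; the patent specifies just that hints arrive from the query processor, not their concrete form.

```python
def route_page(hint):
    """Decide whether a page goes to the immutable or the shared buffer
    pool, based on simplified query hints (field names are assumed)."""
    if hint.get("is_internal_index_node"):
        return "shared"  # internal index nodes have high temporal locality
    if (hint.get("no_locality")                # long-tail page, accessed once
            or hint.get("weak_consistency_read")   # e.g., READ COMMITTED
            or hint.get("index_point_or_range")):  # leaf reached via an index
        return "immutable"
    return "shared"  # default: ordinary shared buffer pool handling
```
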
[0035] In one embodiment, the immutable buffer manager 38 is
responsible for assigning circular page buffers 202 to requesting
processes 206, which in one embodiment may also include threads. In
one embodiment, the page multiplexer system may determine if data
pages for the process are for the immutable buffer first, and then
the immutable buffer manager assigns circular page buffers and page
frames to requesting processes.
[0036] FIG. 4A is a flow diagram illustrating assignment of a
circular page buffer to a process by the immutable buffer manager
38. The process may begin by the immutable buffer manager 38
receiving notification from the page multiplexer that a data page
24 for a requesting process 206 has been identified for staging in
the immutable buffer pool 34' (block 400).
[0037] The immutable buffer manager 38 determines if this will be
the first circular page buffer 202 assigned to this process (block
402). If so, the immutable buffer manager 38 finds a free circular
page buffer 202 in the circular array 200 (block 406). In one
embodiment, the immutable buffer manager 38 finds a free circular
page buffer 202 under an array lock, to avoid a race condition in a
multi-threaded implementation.
[0038] If this is not the first circular page buffer 202 assigned
to this process 206 (block 402), then the immutable buffer manager
38 determines if the process 206 is in single buffer mode (block 404).
If the process 206 is in single buffer mode, then a page frame 28
from the shared buffer pool 26 is assigned to the process 206
(block 412).
[0039] If it is determined that this is the first circular page
buffer 202 assigned to the process (block 402) or that the process
206 is not in single buffer mode (block 404), then the immutable
buffer manager 38 attempts to find a free circular page buffer 202
in the circular array 200 (block 406). If it is determined that a
free circular page buffer 202 is available (block 408), then the
immutable buffer manager 38 assigns the free circular page buffer
202 to the process (block 410). In one embodiment, assignment of
the free circular page buffer 202 may further include assigning one
of the indices of the array index 204 of a free circular page
buffer 202 to the process 206.
[0040] If no free circular page buffer 202 is available, then
the shared buffer pool 26 may be used and a page frame 28 from the
shared buffer pool 26 is assigned to the process 206 (block
412).
[0041] In one embodiment, the immutable buffer manager may assign a
page frame to the requesting process from the circular page buffer
when the multiplexer requests a page frame for a data page.
[0042] FIG. 4B is a flowchart illustrating allocation of a page
frame from an assigned circular page buffer to a process by the
immutable buffer manager 38. The process may begin by finding a
free page frame 208 from the circular page buffer 202 assigned to
the process 206 (block 450). If a free page frame 208 is found
(block 452), then the free page frame is assigned to the process
206 (block 454). If a page frame 208 cannot be found (block 452),
then it is determined if the process is in single buffer mode
(block 456). If the process 206 is in single buffer mode, then a
page frame 28 from the shared buffer pool 26 is assigned to the
process 206 (block 458). If the process 206 is not in single buffer
mode, then another circular page buffer 202 is assigned to the
process 206 (block 460) and the process continues by finding a free
page frame in the assigned circular page buffer (block 450). If the
process cannot get another circular page buffer, it can retry until
a timer expires or the number of retrials reaches a maximum.
[0043] Once a circular page buffer 202 is assigned to a process
206, the process 206 has an exclusive access to the circular page
buffer 202. In one embodiment, a single circular page buffer 202 is
assigned to a process 206 by default, but more than one circular
page buffer 202 can be assigned to the process 206, depending on
the system workload.
[0044] In one embodiment, the immutable buffer manager can
implement a fair-sharing mechanism to prevent one process from
using up the immutable buffer. For instance, if the process 206
waits on a page frame of a circular page buffer 202 for a
predetermined amount of time because all page frames in a circular
buffer are already assigned, the immutable buffer manager 38 can
assign additional circular buffers 202. Consequently, other
processes can have page frames assigned during the wait time
period. In a lightly loaded condition, the process 206 obtains a
sufficient number of circular page buffers 202 to avoid buffer
starvation.
[0045] Once any process 206 starts to use the shared buffer pool
26, the immutable buffer manager 38 may not assign more than one
circular page buffer 202 to the processes 206 after that. If any
process already has more than one assigned circular page buffer,
the assigned circular page buffers 202 may remain assigned until
the used circular page buffers are recycled. In this exemplary
embodiment, serialization takes place only when an array index that
assigns a circular buffer from the circular buffer array to a
process is determined for a multi-threaded implementation, a step
that is very short and negligible.
[0046] The present invention has been described in accordance with
the embodiments shown, and there could be variations to the
embodiments, and any variations would be within the spirit and
scope of the present invention. For example, the exemplary
embodiment can be implemented using hardware, software, a computer
readable medium containing program instructions, or a combination
thereof. Software written according to the present invention is to
be either stored in some form of computer-readable medium such as a
memory, a hard disk, or a CD/DVD-ROM and is to be executed by a
processor. Accordingly, many modifications may be made by one of
ordinary skill in the art without departing from the spirit and
scope of the appended claims.
* * * * *