U.S. patent application number 12/062,138, for a method and apparatus for writing binary data with low power consumption, was filed on 2008-04-03 and published by the patent office on 2008-10-09.
This patent application is currently assigned to The Hong Kong University of Science and Technology. The invention is credited to Oscar Chi Lim Au and Richard Yuk Ming Li.
United States Patent Application 20080250056, Kind Code A1
Application Number: 12/062,138
Family ID: 39826633
Inventors: Au, Oscar Chi Lim; et al.
Publication Date: October 9, 2008

METHOD AND APPARATUS FOR WRITING BINARY DATA WITH LOW POWER CONSUMPTION
Abstract
Systems and methodologies are provided herein for representing
information in a data processing system with low power consumption.
As described herein, parity relationships between multiple nodes of
to-be-written binary information and original information can be
leveraged as described herein to reduce the amount of toggling
required to write information in a memory, thereby reducing power
consumption. Various algorithms for leveraging parity relationships
are described herein, including a Champagne Pyramid Parity Check
(CPPC) algorithm and a Tree-Based Parity Check (TBPC)
algorithm.
Inventors: Au, Oscar Chi Lim (Hong Kong, CN); Li, Richard Yuk Ming (Hong Kong, CN)
Correspondence Address: AMIN, TUROCY & CALVIN, LLP, 1900 East 9th Street, National City Center, 24th Floor, Cleveland, OH 44114, US
Assignee: The Hong Kong University of Science and Technology (Hong Kong, CN)
Family ID: 39826633
Appl. No.: 12/062,138
Filed: April 3, 2008
Related U.S. Patent Documents

Application Number: 60/907,513
Filing Date: Apr 4, 2007
Current U.S. Class: 1/1; 707/999.102; 707/E17.005
Current CPC Class: G06T 2201/0061 (20130101); H04N 1/32203 (20130101); H04N 2201/327 (20130101); H04N 2201/3233 (20130101); G06T 2201/0051 (20130101); H04N 1/32229 (20130101); G06T 1/0028 (20130101); H04N 1/32256 (20130101)
Class at Publication: 707/102; 707/E17.005
International Class: G06F 17/30 (20060101)
Claims
1. A method of writing binary information in a memory, comprising:
identifying binary information to be stored by a memory; generating
a hierarchical data structure comprising a plurality of nodes, the
plurality of nodes comprising respective information bits currently
stored by the memory; determining respective parity values of nodes
in a bottom row of the hierarchical data structure; and toggling
binary values of one or more nodes in the hierarchical data
structure such that the respective parity values of the nodes in
the bottom row of the hierarchical data structure match respective
corresponding bits in the binary information to be stored by the
memory.
2. The method of claim 1, wherein the generating a hierarchical
data structure comprises generating one or more binary pyramid
structures, and a number of nodes at the bottom row of the one or
more binary pyramid structures equals a number of bits in the
identified binary information.
3. The method of claim 2, wherein the toggling comprises providing
toggling indications for respective nodes at the bottom row of the
binary pyramid structure for which a determined parity value for
the node differs from a corresponding bit of the identified binary
information.
4. The method of claim 3, wherein the toggling comprises:
identifying groups of consecutive nodes for which a toggling
indication is provided; replacing toggling indications provided by
respective nodes in an identified group of consecutive nodes with a
single toggling indication at the lowest common predecessor node to
each node in the identified group of consecutive nodes; and
toggling binary values of nodes in the one or more binary pyramid
structures for which toggling indications are provided.
5. The method of claim 2, wherein the determining comprises
determining a parity value for a node in a binary pyramid structure
by performing an exclusive-OR operation on the binary value of a
node, a parity value for a left source node to the node for which a
parity value is being calculated, a parity value for a right source
node to the node for which a parity value is being calculated, and
a parity value for an upper source node that is a common source
node of the left source node and the right source node.
6. The method of claim 1, wherein the generating a hierarchical
data structure comprises generating a complete N-ary tree
structure, and a bottom row of the tree structure corresponds to
leaf nodes of said tree structure.
7. The method of claim 6, wherein: the generating a hierarchical
data structure further comprises providing one or more said tree
structures such that a number of leaf nodes of the one or more tree
structures equals a number of bits in the identified binary
information; and the determining comprises determining parity
values for respective leaf nodes of the one or more tree structures
by determining parity of nodes in paths from the respective leaf
nodes to a root node.
8. The method of claim 7, wherein the toggling comprises: obtaining
a toggle array by comparing respective parity values for the leaf
nodes of the one or more tree structures and corresponding bits in
the identified binary information; constructing one or more toggle
trees corresponding to the one or more tree structures at least in
part by using respective elements of the toggle array as leaf
nodes; and toggling binary values of respective nodes of the one or
more tree structures corresponding to respective highest nodes in
the one or more toggle trees for which all leaf nodes indicate that
parity values for the leaf nodes do not match corresponding bits in
the identified binary information.
9. The method of claim 6, wherein the determining comprises
determining a parity value for a node in a tree structure by
performing an exclusive-OR operation between the binary value of a
node and a parity value for a parent node of the node for which a
parity value is being calculated.
10. The method of claim 1, wherein the information bits currently
stored by the memory correspond to a previous communication of
data, the identified binary information corresponds to a subsequent
communication of data, and the method further comprises conducting
the subsequent communication of data by transmitting one or more
toggling positions that transform data corresponding to the
previous communication of data to the identified binary data.
11. A computer-readable medium having stored thereon instructions
operable to perform the method of claim 1.
12. A system for recording binary information, comprising: a
champagne pyramid component that groups storage locations for
respective information bits in a memory into at least one binary
pyramid structure having a plurality of nodes respectively
corresponding to the storage locations; a parity check component
that determines parity values of respective nodes in a bottom row
of the at least one binary pyramid structure based on original
information written in the memory; and a toggling component that
toggles binary values of one or more nodes in the at least one
binary pyramid structure such that parity values of nodes in the
bottom row of the at least one binary pyramid structure match
corresponding bits in binary information to be recorded in the
memory.
13. The system of claim 12, further comprising a comparison
component that provides toggling indications for respective nodes
at the bottom row of the at least one binary pyramid structure for
which determined parity values for the respective nodes differ from
corresponding bits of the binary information to be recorded in the
memory.
14. The system of claim 13, further comprising a flavor adding
optimization component that identifies groups of consecutive nodes
in the bottom row of the at least one binary pyramid structure for
which a toggling indication is provided and replaces toggling
indications provided by respective nodes in identified groups of
consecutive nodes with a single toggling indication at the lowest
common predecessor node to each node in the identified group of
consecutive nodes.
15. The system of claim 14, wherein the toggling component toggles
binary values of respective nodes in the at least one binary
pyramid structure for which toggling indications are provided.
16. A system for recording binary information, comprising: a tree
component that stores information in a memory as at least one
complete N-ary tree structure having a plurality of nodes
respectively corresponding to storage locations in the memory; a
parity check component that determines respective parity values for
leaf nodes of the at least one complete N-ary tree structure based
on original information written in the memory; and a modification
component that modifies binary values of one or more nodes in the
at least one complete N-ary tree structure such that the parity
values of the leaf nodes match corresponding bits in binary
information to be recorded in the memory.
17. The system of claim 16, further comprising a comparison
component that establishes one or more toggle arrays by comparing
respective parity values for the leaf nodes of the one or more
complete N-ary tree structures to corresponding bits in the binary
information to be recorded in the memory.
18. The system of claim 17, further comprising a fountain
investigation component that constructs one or more toggle trees
corresponding to the one or more complete N-ary tree structures,
wherein the one or more toggle trees utilize respective elements of
the one or more toggle arrays as leaf nodes.
19. The system of claim 18, further comprising a modification
location determination component that identifies nodes of the at
least one complete N-ary tree structure that correspond to
respective highest nodes in the one or more toggle trees for which
all leaf nodes indicate that parity values for the leaf nodes do
not match corresponding bits in the binary information to be
recorded in the memory.
20. A system for recording binary information, comprising: means
for providing binary information to be written in a memory; means
for providing a hierarchical data structure comprising a plurality
of nodes that represent original information written in the memory;
means for determining respective parity values of nodes at a bottom
row of the hierarchical data structure; and means for toggling
binary values of one or more nodes of the hierarchical data
structure such that the respective parity values of the nodes at
the bottom row of the hierarchical data structure match
corresponding bits in the provided binary information.
Description
CROSS-REFERENCE
[0001] This application claims the benefit of U.S. Provisional
Patent Application Ser. No. 60/907,513, filed on Apr. 4, 2007,
entitled "MULTIMEDIA WATERMARKING TECHNIQUES WITH LOW
DISTORTION."
TECHNICAL FIELD
[0002] The present disclosure relates generally to data processing,
and more particularly to techniques for low power consumption
memory design for data processing systems.
BACKGROUND
[0003] Power consumption is a major concern in the modern development
of computers and other components capable of computing. This is
most apparent in battery-powered portable devices. People often
carry extra batteries, AC adapters, and battery rechargers to
ensure against a loss of functionality. Having to carry these
accessories and supplies decreases the convenience of the portable
devices. The need to carry extra batteries and power accessories
can be obviated in part by using larger (or more) batteries, but
this increases device bulk and thus decreases portability.
[0004] Reducing power requirements allows the use of smaller
batteries and/or decreases the frequency with which batteries must
be replaced or recharged. Using smaller batteries decreases device
bulk. Reducing frequency of replacement reduces the financial and
environmental cost of device ownership. Reducing the frequency of
recharging extends battery life and makes it more practical to
leave power accessories behind. In some cases, lower power
requirements increase the viability of solar power to replace or
supplement battery power, further enhancing the portability.
Reducing power consumption also reduces heat dissipation, so that
less bulk needs to be dedicated to removing heat from a device.
[0005] There are many approaches to reducing power requirements.
Advances in semiconductor manufacturing have permitted smaller and
more power efficient circuits. Advances in circuit design and
processor architecture have also reduced power requirements. Such
advances have reduced power requirements across all types of
devices including processors, memories, and interface devices.
[0006] In addition to these hardware-oriented approaches, there are
software-oriented approaches to reducing power requirements.
Considerable effort has been invested in designing instruction sets
and data formats for efficient use of available capacities for
computation, storage and communication. As these capacities are
used more efficiently, power requirements are reduced. However, as
dramatic as power reductions have been to date, further reductions
are desired to increase portability and convenience, reduce
environmental and financial costs, and achieve other
objectives.
SUMMARY
[0007] The following presents a simplified summary of the claimed
subject matter in order to provide a basic understanding of some
aspects of the claimed subject matter. This summary is not an
extensive overview of the claimed subject matter. It is intended to
neither identify key or critical elements of the claimed subject
matter nor delineate the scope of the claimed subject matter. Its
sole purpose is to present some concepts of the claimed subject
matter in a simplified form as a prelude to the more detailed
description that is presented later.
[0008] The present disclosure provides systems, components, and
methodologies for writing and/or recording information in memory
components. In accordance with various aspects described herein, a
hierarchical data structure, such as a binary pyramid structure
and/or an N-ary tree structure, is used to record information.
Information can be represented as the parity values of a set of
nodes in such a data structure, rather than individual nodes
themselves.
[0009] In one example, a Champagne Pyramid Parity Check (CPPC)
algorithm and/or a Tree-Based Parity Check (TBPC) algorithm can be
utilized to reduce the number of toggling operations required to
write data to a memory component (e.g., to reduce the number of
changes necessary in a memory for converting an original,
previously stored set of information to a to-be-written set of
information in the memory). The CPPC algorithm and/or the TBPC
algorithm can reduce the total number of binary node changes
required during the data writing process, thereby reducing memory
power consumption.
[0010] To the accomplishment of the foregoing and related ends,
certain illustrative aspects of the claimed subject matter are
described herein in connection with the following description and
the annexed drawings. These aspects are indicative, however, of but
a few of the various ways in which the principles of the claimed
subject matter can be employed. The claimed subject matter is
intended to include all such aspects and their equivalents. Other
advantages and novel features of the claimed subject matter can
become apparent from the following detailed description when
considered in conjunction with the drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] FIG. 1 is a high-level block diagram of a system for
recording binary information with low power consumption in
accordance with various aspects described herein.
[0012] FIG. 2 is a block diagram of a system that implements an
example technique for writing binary information in accordance with
various aspects described herein.
[0013] FIGS. 3-5 illustrate example data structures that can be
utilized in connection with one or more data processing techniques
described herein.
[0014] FIGS. 6-7 illustrate example update and toggle operations
that can be performed in connection with one or more data
processing techniques described herein.
[0015] FIG. 8 is a block diagram of a system that implements
another example technique for writing binary information in
accordance with various aspects described herein.
[0016] FIG. 9 illustrates an example data structure that can be
utilized for one or more data processing techniques described
herein.
[0017] FIGS. 10-11 illustrate respective operations that can be
performed in connection with one or more data processing techniques
described herein.
[0018] FIGS. 12-14 are flowcharts of respective methods for
low-power recording of binary information in accordance with
various aspects described herein.
[0019] FIG. 15 is a block diagram of an example operating
environment in which various aspects described herein can
function.
[0020] FIG. 16 is a block diagram of an example networked computing
environment in which various aspects described herein can
function.
DETAILED DESCRIPTION
[0021] The claimed subject matter is now described with reference
to the drawings, wherein like reference numerals are used to refer
to like elements throughout. In the following description, for
purposes of explanation, numerous specific details are set forth in
order to provide a thorough understanding of the claimed subject
matter. It may be evident, however, that the claimed subject matter
may be practiced without these specific details. In other
instances, well-known structures and devices are shown in block
diagram form in order to facilitate describing the claimed subject
matter.
[0022] As used in this application, the terms "component,"
"system," and the like are intended to refer to a computer-related
entity, either hardware, a combination of hardware and software,
software, or software in execution. For example, a component may
be, but is not limited to being, a process running on a processor,
a processor, an object, an executable, a thread of execution, a
program, and/or a computer. By way of illustration, both an
application running on a server and the server can be a component.
One or more components may reside within a process and/or thread of
execution and a component may be localized on one computer and/or
distributed between two or more computers. Also, the methods and
apparatus of the claimed subject matter, or certain aspects or
portions thereof, may take the form of program code (i.e.,
instructions) embodied in tangible media, such as floppy diskettes,
CD-ROMs, hard drives, or any other machine-readable storage medium,
wherein, when the program code is loaded into and executed by a
machine, such as a computer, the machine becomes an apparatus for
practicing the claimed subject matter. The components may
communicate via local and/or remote processes such as in accordance
with a signal having one or more data packets (e.g., data from one
component interacting with another component in a local system,
distributed system, and/or across a network such as the Internet
with other systems via the signal).
[0023] Turning now to FIG. 1, a block diagram of a system 100 for
recording binary information with low power consumption in
accordance with various aspects described herein is illustrated. In
one example, system 100 can include a memory component 110 for
storing binary data. In one example, the memory component 110
initially stores a set of original data. To facilitate efficient
recording of to-be-written data into the memory component 110, the
memory component 110 can comprise a parity check component 130
and/or a modification component 140.
[0024] In accordance with one aspect, the parity check component
130 can divide storage locations in the memory component 110 that
store respective bits of original data into groups. These groups
can then be respectively characterized by the parity of the binary
data values or bits stored at the locations that constitute the
respective groups. By way of specific, non-limiting example, binary
data can be represented as a binary digit "0" or "1," and the
parity of a group of such data can be even-odd parity such that,
for example, the parity of a group is "1" if the number of bits in
the group having a value of "1" is odd and "0" if the number is
even. It should be appreciated, however, that this constitutes
merely one way in which parity values can be obtained and that any
suitable technique could be used.
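By way of illustration only (Python is used here purely as a sketch and is not part of the disclosure), the even-odd parity rule described above can be expressed as follows:

```python
def group_parity(bits):
    """Even-odd parity: "1" if the group holds an odd number of 1-bits,
    "0" if the number of 1-bits is even."""
    parity = 0
    for b in bits:
        parity ^= b  # XOR accumulates the count of 1s modulo 2
    return parity

print(group_parity([1, 0, 1, 1]))  # three 1s (odd) -> prints 1
print(group_parity([1, 0, 1, 0]))  # two 1s (even) -> prints 0
```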
[0025] In accordance with another aspect, the parity of respective
bits of original data stored at the memory component 110 as
formulated by the parity check component 130 can be utilized as a
basis for representing to-be-written data. Further, the parity
check component 130 can formulate groups of data bits such that
they overlap, thereby allowing the parity of multiple groups to be
altered by toggling the value of a single bit in the group. By
doing so, the amount of toggling necessary to modify the original
data stored by the memory component 110 into the to-be-written data
can be reduced, thereby increasing embedding efficiency and
reducing the amount of power required for writing data to the
memory component 110. In accordance with various aspects described
herein, the parity check component 130 can utilize a variety of
algorithms for dividing a set of data bits into groups to
facilitate efficient writing of the data. Examples of
algorithms that can be utilized are described in detail infra.
[0026] After processing by the parity check component 130, groups
of data bits formed by the parity check component 130 and their
respective parity values can be provided to a modification
component 140 to facilitate recording of to-be-written data to the
memory component 110. In one example, the modification component
140 can compare respective information bits of the to-be-written
data to the parity of corresponding groups of bits of the original
data. If it is determined that an information bit in the
to-be-written data and the parity of its corresponding group of
original data bits does not match, a data bit in the group can be
toggled to alter the parity of the group such that a match is
created. In accordance with one aspect, respective groups of data
bits can share bits among each other and/or otherwise overlap. As a
result, the parity of multiple groups can be changed with a single
toggling operation, thereby reducing the amount of toggling
operations required for writing data to the memory component 110.
In one example, toggling can be based on one or more techniques as
generally known in the art, such as simple toggling of a single bit
or pair toggling of a master/slave data bit pair.
[0027] It should be appreciated that while FIG. 1 and the above
description relate to techniques for writing data to a memory
component 110, various techniques described herein can be applied
to any computing and/or other application where data is desirably
modified from a first state to a second state. Thus, for example,
various aspects described herein can additionally be applied to
applications such as disk storage management or communication
management. The techniques described herein can be applied to data
communication by, for example, buffering and/or maintaining a
previously transmitted set of information. For a subsequent
communication(s), instead of requiring communication of an entire
set of information, one or more toggling positions can instead be
transmitted to enable a device receiving the communication to
obtain a set of information by toggling at indicated positions in a
previously received communication. By transmitting toggling
locations rather than the corresponding data itself, communicated
information can effectively be compressed prior to transmission,
thereby allowing a communication system to make more effective use
of network bandwidth and/or other system resources.
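The toggle-position communication described above can be sketched as follows; the helper names are hypothetical and serve only to illustrate the sender and receiver roles:

```python
def toggle_positions(previous, current):
    """Sender side: positions at which the new bit string differs
    from the previously transmitted (buffered) bit string."""
    return [i for i, (a, b) in enumerate(zip(previous, current)) if a != b]

def apply_toggles(previous, positions):
    """Receiver side: recover the new data by toggling the buffered
    copy at the indicated positions."""
    data = list(previous)
    for i in positions:
        data[i] ^= 1
    return data

prev = [0, 1, 1, 0, 0, 1]            # previously transmitted information
new  = [0, 1, 0, 0, 1, 1]            # subsequent set to communicate
pos = toggle_positions(prev, new)
print(pos)                           # [2, 4]
print(apply_toggles(prev, pos) == new)  # True
```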
[0028] Referring now to FIG. 2, a block diagram of a system 200
that implements an example technique for writing binary information
210 in a memory component 240 is provided. In one example, binary
information 210 can be written to the memory component 240 such
that it replaces original information 220 pre-recorded in the
memory component 240. Further, in accordance with one aspect, the
memory component 240 can implement a Champagne Pyramid Parity Check
(CPPC) algorithm for writing binary information 210 into the memory
component 240 by utilizing subcomponents 244-252.
[0029] In accordance with one aspect, the CPPC algorithm can be
utilized by the memory component 240 to reduce power consumption
associated with writing binary data by reducing the total number of
bits of original information 220 that are required to be changed
(e.g., toggled) during the data writing process. In one example,
the CPPC algorithm can be used in combination with one or more
conventional data processing algorithms to write binary information
210 into the memory component 240. By doing so, the CPPC algorithm
can be utilized to improve upon such conventional techniques by,
for example, reducing power consumption required by such
conventional techniques.
[0030] In one example, the memory component 240 can extend the
functionality of traditional data processing and/or storage
techniques by borrowing the functionality of various elements of
such techniques and applying additional elements to improve their
performance. By way of specific example, the memory component 240
can classify binary values stored at various locations therein into
groups. For example, locations having a binary value of "0" can be
classified into a first group, while locations having a binary
value of "1" can be classified into a second group. A comparison
component 248 and/or a toggling component 252 can be utilized to
force original information 220 stored by the memory component 240
to match corresponding information bits of the binary information
210 to be written to the memory component 240.
[0031] In accordance with one aspect, the memory component 240 can
utilize a CPPC algorithm for improved memory writing functionality
as follows. First, a champagne pyramid component 244 can be
utilized to organize storage locations at the memory component 240
into one or more champagne pyramid structures as described in more
detail infra. Following processing by the champagne pyramid
component 244, a parity calculation component 246 can be used to
identify an array of information bits corresponding to the original
information 220 with the same size as that of the binary
information 210 to be written. After the parity calculation
component 246 identifies such an array, a flavor adding
optimization component 250 can then be utilized to find a minimum
number of locations that must be toggled to cause the identified
array of information bits to match the binary information 210. Data
writing can then be processed via the toggling component 252 on the
locations marked by the flavor adding optimization component 250.
By way of example, the operation of the champagne pyramid component
244, the parity calculation component 246, and/or the flavor adding
optimization component 250 can proceed as set forth in the
following description.
[0032] In accordance with another aspect, the Champagne Pyramid
Parity Check algorithm usable by the memory component 240 is so
named because it leverages various properties that can be observed
in a champagne pyramid. For example, a two-dimensional 5-level
champagne pyramid 300 built by 15 wine glasses numbered 1 through
15 is illustrated by FIG. 3. It can be appreciated that, if
champagne is poured into the highest glass in the pyramid 300, the
champagne will fill all glasses below the highest glass as it
overflows down the pyramid 300. Similarly, in a more complicated
scenario, flavorless champagne can be poured into glass 1 at the
top of the pyramid 300 while apple-flavored champagne can be poured
into glass 4. It can be appreciated that, as time elapses while
pouring continues, all of the glasses in the pyramid 300 will come
to be filled with champagne. However, considering the glasses at
the bottom row of the pyramid 300, it can be appreciated that
glasses 11 through 13 would contain apple-flavored champagne at
such a time while glasses 14 and 15 would not.
[0033] Based on these observations, the rows of the pyramid 300 can
be numbered from the bottom. As illustrated in FIG. 3, glasses 11
through 15 are on the first row and glass 1 is on the fifth row.
Thus, for a pyramid 300 with L levels, N successive glasses on the
bottom row of the pyramid 300 will contain the same flavor of
champagne as a glass on the N-th row of the pyramid 300 if
champagne is poured into said glass until it has run off into the
bottom row of the pyramid 300. As a result, it can be appreciated
that if it is desired to add a common flavor into N successive
glasses on the bottom row, the desired flavor can be poured into a
single glass on the N-th row of the pyramid 300 instead of adding
the flavor on the bottom row for all N glasses.
[0034] Similarly, the champagne pyramid component 244 can arrange
information of a binary format into a structure that exhibits the
above properties. In one example, the champagne pyramid component
244 can represent a set of original information 220 and its
corresponding binary values as a binary pyramid structure, such as
the example structure illustrated by diagram 400 in FIG. 4. Each of
these binary values can then be treated as a wine glass in a
pyramid structure having a predefined scanning order. As diagram
400 further illustrates, the number of information bits that can be
held by a champagne pyramid structure is equal to the number of
elements on the bottom row of the pyramid. Thus, as the number of
memory bits (e.g., M) in the original information 220 is limited and
the size of the binary information 210 to be written (e.g., L) is
fixed, the number of memory bits of the original information 220
may not be sufficient to build a single L-level champagne pyramid.
In such cases, multiple N-level pyramids can be built instead. In
one example, N can be constrained by the following equation:
N(N+1)/2 ≤ MN/L, i.e., N ≤ 2M/L - 1. (1)
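The feasibility condition behind constraint (1) can be checked mechanically. The following sketch, offered for illustration only and under the assumption that ceil(L/N) pyramids of N(N+1)/2 glasses each must fit within the M available memory bits, searches for the largest usable pyramid height:

```python
import math

def max_pyramid_levels(M, L):
    """Largest pyramid height N such that ceil(L/N) pyramids of
    N*(N+1)/2 glasses each fit within M memory bits."""
    best = 0
    for N in range(1, L + 1):
        glasses_needed = math.ceil(L / N) * (N * (N + 1) // 2)
        if glasses_needed <= M:
            best = N
    return best

# With M = 30 memory bits and L = 10 information bits, two 5-level
# pyramids (15 glasses each) exactly fill the memory.
print(max_pyramid_levels(30, 10))  # -> 5
```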
[0035] In accordance with one aspect, writing binary information
210 such that it replaces original information 220 using CPPC can
be dependent on the bottom row of the pyramid structure(s)
constructed by the champagne pyramid component 244. As described
above with respect to diagram 300, data flow through the structure
can be visualized as liquid poured from the top of a champagne
pyramid such that eventually all glasses on the bottom row of the
pyramid are filled up. Thus, the parity calculation component 246
can begin processing of a pyramid structure(s) by defining the
Region of Interest of a wine glass n, ROI(n), as the set of glasses
that belong to the possible paths from the top glass of the pyramid
to glass n. Thus, for pyramid 300, ROI(11)={1, 2, 4, 7, 11} and
ROI(13)={1, 2, 3, 4, 5, 6, 8, 9, 13}.
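For the 5-level pyramid 300, the region of interest can be computed by walking upward from a glass to all of its possible sources. The following sketch (illustrative only; rows and positions are 1-indexed here from the top, whereas the description numbers rows from the bottom) reproduces the two examples above:

```python
def glass_number(row, pos):
    # Rows counted 1..5 from the TOP; glasses numbered left to right.
    return row * (row - 1) // 2 + pos

def roi(row, pos):
    """Region of Interest: every glass lying on some pouring path from
    the top glass down to the glass at (row, pos), inclusive."""
    members, frontier = set(), {(row, pos)}
    while frontier:
        members |= frontier
        # Each glass (r, p) can receive from glasses (r-1, p-1) and (r-1, p).
        frontier = {(r - 1, q) for (r, p) in frontier
                    for q in (p - 1, p)
                    if r > 1 and 1 <= q <= r - 1}
    return sorted(glass_number(r, p) for (r, p) in members)

print(roi(5, 1))  # glass 11 -> [1, 2, 4, 7, 11]
print(roi(5, 3))  # glass 13 -> [1, 2, 3, 4, 5, 6, 8, 9, 13]
```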
[0036] As noted previously, each wine glass in pyramid 300 contains
a value of either "0" or "1." Based on this, the parity calculation
component 246 can count the number of glasses containing "1" in the
region of interest of each glass on the bottom row of the pyramid.
In one example, if the number of such glasses is an even number for
a glass, the parity of that glass is set to "0." Similarly, if the
number of such glasses is an odd number for a given glass, the
parity of that glass can instead be set to "1." After these
calculations, an information array (IA) with the same size as the
binary information 210 to be written can be formed. Diagram 400 in
FIG. 4 illustrates example parity calculations.
[0037] In one example, for each node on the bottom row of a pyramid
structure, the parity calculation component 246 can define the ROI
and then count the number of nodes containing a value of "1" in the
defined region. By way of specific, non-limiting example, the
computational complexity of such operations can be reduced by
making use of various properties of the ROI. First, for each node x
on the bottom row of a pyramid structure, the parity calculation
component can define the "left source" of x, LS(x), the "right
source" of x, RS(x), and the "upper source" of x, US(x), as
illustrated in diagram 500 in FIG. 5. It should be appreciated that
for some cases, LS(x), RS(x), and/or US(x) may not exist.
[0038] The ROI of a non-existing node can be represented as an
empty set. Otherwise, for a node x, ROI can be calculated as
follows:
ROI(US(x)) = ROI(LS(x)) ∩ ROI(RS(x)), (2)
ROI(x) = ROI(LS(x)) ∪ ROI(RS(x)) + {x} = ROI(LS(x)) + ROI(RS(x)) - ROI(US(x)) + {x}. (3)
Based on the ROI of a node x as calculated in Equations (2)-(3),
the parity of a node x can be calculated as follows:
Parity(x) = Parity(LS(x)) ⊕ Parity(RS(x)) ⊕ Parity(US(x)) ⊕ x. (4)
Thus, from Equation (4), the parity calculation component 246 can
utilize a smart parity calculation method as follows. The parity
calculation component 246 can start from the top of the pyramid and
process each node in increasing order of glass number, as shown in
diagram 300. If US(x) is "0" for a given node, no further operation
is needed. Otherwise, the values contained by LS(x), RS(x), and
x can be toggled as illustrated by diagrams 610 and 620 in FIG.
6.
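The recurrence of Equation (4) can be sketched as a single top-down pass in which each glass's parity is derived from its left, right, and upper sources, a non-existing source contributing parity 0 (the function name and row-list encoding are assumptions of this sketch):

```python
def parities_eq4(pyramid):
    """Parity of every glass via Equation (4):
    Parity(x) = Parity(LS(x)) ^ Parity(RS(x)) ^ Parity(US(x)) ^ value(x).
    pyramid: list of rows, top row first; returns the bottom-row parities."""
    par = []
    for r, row in enumerate(pyramid):
        prow = []
        for p, v in enumerate(row):
            # Sources of the glass at (row r, position p); 0 if non-existing.
            ls = par[r - 1][p - 1] if r >= 1 and p >= 1 else 0
            rs = par[r - 1][p] if r >= 1 and p <= r - 1 else 0
            us = par[r - 2][p - 1] if r >= 2 and 1 <= p <= r - 1 else 0
            prow.append(ls ^ rs ^ us ^ v)
        par.append(prow)
    return par[-1]   # information array = parities of the bottom row
```

Because every glass is visited once, this avoids re-counting the full ROI of each bottom-row glass.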
[0039] Upon generation of an IA at the parity calculation component
246, the IA can be processed with an array corresponding to the
binary information 210 being written using an exclusive-OR (XOR)
operator and/or another appropriate operator at the comparison
component 248. For each value of "1" that appears in the resulting
array, the corresponding location on the bottom row of the pyramid
structure can be marked as "To Be Flavored" (TBF), as illustrated
in diagram 710 in FIG. 7. Theoretically, it can be appreciated that
a node on the bottom row of the pyramid will be marked as TBF with
a probability of 0.5. Toggling optimization can then be performed
by the flavor adding optimization component 250 by employing the
observation that adding flavor into N successive wine glasses on
the bottom row of a champagne pyramid is equivalent to adding
flavor into one wine glass on the N-th row. Thus, if N successive
nodes in a pyramid structure are marked as TBF, the flavor adding
optimization can designate only one location in the original
information 220 for toggling instead of requiring toggling of all N
locations.
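The run-collapsing observation can be sketched as follows: each maximal run of n successive TBF glasses on the bottom row of an L-level pyramid is replaced by a single toggle on row L - n + 1 counted from the top (the n-th row from the bottom), whose bottom-row descendants are exactly that run. The function name and list encoding are illustrative assumptions:

```python
def toggle_locations(tbf):
    """tbf: list of 0/1 flags for the bottom row, 1 = To Be Flavored.
    Returns a list of (row, position) toggles, 1-indexed, one per TBF run."""
    L = len(tbf)
    toggles = []
    p = 0
    while p < L:
        if tbf[p]:
            s = p
            while p < L and tbf[p]:
                p += 1
            n = p - s                           # run length
            toggles.append((L - n + 1, s + 1))  # one toggle replaces n toggles
        else:
            p += 1
    return toggles
```

For the 5-level example of FIG. 7, the TBF run on the second through fourth bottom glasses collapses to a single toggle on the third row.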
[0040] An example of the flavor adding optimization used in
connection with data processing is illustrated by diagrams 710 and
720 in FIG. 7. As FIG. 7 illustrates, given the pyramid structure
400 illustrated in FIG. 4 and a 5-bit set of binary information
{11000} to be written, an exclusive-OR of the IA with the binary
information can be performed to obtain a resulting array of
{01110}. As illustrated by diagram 710,
the second, third, and fourth nodes on the bottom row can then be
marked as TBF. Instead of toggling all three glasses, however, the
memory component 240 can fully record the binary information 210 by
toggling only one glass on the third row, as indicated by FIG.
7.
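Putting the steps of the CPPC write together, a minimal end-to-end sketch follows (function names and encodings are assumptions; a small pyramid of the sketch's own is used, since the contents of structure 400 appear only in the figures). It computes the IA, marks TBF runs, toggles one glass per run, and verifies that the rewritten parities match the target bits:

```python
def cppc_write(pyr, target):
    """Sketch of a CPPC write: pyr is a list of rows (top row first)
    holding the original bits; target is the binary information to write.
    Mutates pyr with one toggle per TBF run and returns it."""
    def bottom_parities(p_):
        # Equation (4) parity pass; missing sources contribute 0.
        par = []
        for r, row in enumerate(p_):
            prow = []
            for p, v in enumerate(row):
                ls = par[r - 1][p - 1] if r >= 1 and p >= 1 else 0
                rs = par[r - 1][p] if r >= 1 and p <= r - 1 else 0
                us = par[r - 2][p - 1] if r >= 2 and 1 <= p <= r - 1 else 0
                prow.append(ls ^ rs ^ us ^ v)
            par.append(prow)
        return par[-1]

    L = len(target)
    tbf = [a ^ b for a, b in zip(bottom_parities(pyr), target)]
    p = 0
    while p < L:
        if tbf[p]:
            s = p
            while p < L and tbf[p]:
                p += 1
            pyr[L - (p - s)][s] ^= 1   # one toggle replaces the whole run
        else:
            p += 1
    assert bottom_parities(pyr) == target   # written parities match the target
    return pyr
```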
[0041] Referring next to FIG. 8, an additional system 800 that can
be utilized for low-power recording of binary information 810 in
accordance with various aspects described herein is illustrated. As
illustrated by FIG. 8, binary information 810 can be written into a
memory component 840 that stores previously-recorded original
information 820. In accordance with one aspect, the memory
component 840 can implement a TBPC (Tree Based Parity Check)
algorithm for recording binary information 810 by utilizing
subcomponents 844-854.
[0042] In accordance with one aspect, a TBPC algorithm can be
utilized by the memory component 840 in conjunction with one or
more existing data processing techniques. In one example, the
memory component 840 can utilize a comparison component 848 for
comparing binary values of original information 820 with
corresponding to-be-written binary information 810, and/or a
modification location determination component 852 for deciding
which bit(s) need modifications to hold to-be-written binary data
810. Modification can then be performed based on the comparisons
via a modification component 854. For every single to-be-written
bit, it can be observed that the probability that the original
information 820 will require modification is 0.5. Accordingly, the
memory component 840 can utilize the TBPC algorithm to attempt to
reduce the probability of modifying the original information
820.
[0043] In accordance with one aspect, the memory component 840 can
utilize the TBPC algorithm to achieve an improvement in power
consumption by reducing the number of to-be-written bits (e.g.,
nodes) that require modification operations. In one example, the
memory component 840 can employ a tree component 844, a parity
calculation component 846, and/or a fountain investigation
component 850 to extend the functionality of a traditional data
processing scheme by way of the TBPC algorithm. In general, the
TBPC algorithm leverages relationships among ancestors and
descendants in an N-ary tree to improve data writing functionality.
By way of example, the operation of the tree component 844, parity
calculation component 846, and/or fountain investigation component
850 is set forth in further detail in the following description. As
used herein, the size of binary information 810 to be written is
represented as L.
[0044] In conventional data processing algorithms, the values of
bit locations (nodes) can be classified as either "0" or "1." Thus,
16 bits of information traditionally require 16 nodes to be
represented. Immediately after classification, these values are
then compared with corresponding bits of the binary information
810. If respective values are the same as a to-be-written bit, no
further operations are performed. Otherwise, one or more processes
are carried out to toggle the value.
[0045] In contrast, by way of example, the tree component 844 can
operate as follows. First, the tree component 844 can populate an
N-ary complete tree, herein referred to as a "master tree." It can
be appreciated that because the master tree is an N-ary complete
tree, every node of the master tree, except for leaf nodes, can be
configured to have N child nodes. In one example, one leaf node can
be utilized to hold one information bit, which can correspond to
the parity value of the node. Accordingly, it can be appreciated
that to write binary information 810 of L bits, a master tree can
be required to have L leaves. FIG. 9 illustrates an example master
tree 900 that can be created by the tree component 844 for N=2 and
L=16. As illustrated by FIG. 9, 31 nodes are utilized to represent
16 bits of information.
[0046] Based on a master tree constructed by the tree component
844, a parity calculation component 846 can be utilized to
determine the information bits represented by each leaf node in the
master tree. In one example, this can be accomplished for a leaf
node by traveling from the leaf node to the root node of the master
tree. If the number of occurrences of "1" values is an odd number,
the information bit of the leaf node can be regarded as "1."
Otherwise, the information bit can be regarded as "0." These
calculations are further illustrated in FIG. 9 for master tree 900.
As shown, the information array (IA) listed under the tree
represents the information written in the master tree.
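A leaf's information bit, as defined above, can be sketched over a level-order array encoding of the master tree (an assumed representation of this sketch; for a node at index i, its parent sits at index (i - 1)//N):

```python
def leaf_information(tree, N=2):
    """tree: complete N-ary tree in level order (root at index 0), values 0/1.
    A leaf's information bit is the parity of the number of 1s on its
    path up to (and including) the root."""
    n = len(tree)
    # For a complete N-ary tree with n nodes, the leaf count L satisfies
    # n = (N*L - 1)/(N - 1), i.e., L = (n*(N - 1) + 1)/N.
    num_leaves = (n * (N - 1) + 1) // N
    bits = []
    for i in range(n - num_leaves, n):   # leaves occupy the tail of level order
        ones, j = 0, i
        while True:
            ones += tree[j]
            if j == 0:
                break
            j = (j - 1) // N             # step to the parent
        bits.append(ones % 2)
    return bits
```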
[0047] Next, by performing bitwise logical exclusive-OR (XOR)
operations between respective bits of the binary information 810
and the information carried by the master tree (the original
information), the comparison component 848 can obtain a toggle
array. As an example of this comparison, it can be observed that
for the example master tree 900, the obtained information array is
{1110110101111000}. Assuming a binary information array of
{0010001001011110}, a resultant toggle array obtained by the
comparison component 848 becomes {1100111100100110}. This
comparison is illustrated by diagram 1000 in FIG. 10.
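The toggle-array computation can be checked directly against the example values given above for master tree 900:

```python
# Bitwise XOR of the master tree's information array with the binary
# information to be written yields the toggle array.
ia = "1110110101111000"
info = "0010001001011110"
toggle = "".join(str(int(a) ^ int(b)) for a, b in zip(ia, info))
# toggle matches the toggle array {1100111100100110} of diagram 1000.
```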
[0048] In accordance with one aspect, respective values of "1" in
the toggle array can represent the fact that the corresponding
nodes of the master tree representing the original information 820
require toggling. However, it can be appreciated that power
consumption is introduced by any single modification to the
original information 820. Accordingly, to reduce power consumption,
TBPC can be utilized to minimize the number of "1" values in the
toggle array. Referring back to FIG. 9, when the master tree 900 is
carefully examined, it can be observed that a single change in any
node, either from "1" to "0" or from "0" to "1," can result in a
change in the parity of all of the descendants of the changed node.
Thus, instead of changing the values of N sibling nodes in the
master tree 900, a single change in their common parent node can
give the same effect.
[0049] To leverage the above observation, a fountain investigation
component 850 can build a toggle tree with the same size as the
master tree. In one example, the leaf nodes of the toggle tree can
be populated by the elements of the toggle array in the order that
the elements appear in the toggle array. The remaining nodes in the
tree can be initially populated using any suitable value. In one
example, the fountain investigation component can then begin
analysis at the root of the toggle tree as follows. For a given
node, if all N of the child nodes of the given node contain the
value "1," the child nodes are updated with the value "0" and the
node being examined can be set to "1." Otherwise, the node being
processed can be reset to "0." An example toggle tree generation
and fountain investigation process for the example toggle array
given above is illustrated in diagram 1000 in FIG. 10.
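A sketch of the fountain investigation follows; the description above is realized here bottom-up, settling each level before the level above it inspects its children, which is an implementation assumption of this sketch (as are the function name and level-list representation):

```python
def fountain_investigation(leaves, N=2):
    """Build the toggle tree from its leaf row upward: whenever all N
    children of a node hold '1', clear them and set the parent instead,
    so one toggle replaces N. Returns the levels, root level first."""
    levels = [list(leaves)]
    while len(levels[-1]) > 1:
        below = levels[-1]
        level = []
        for i in range(0, len(below), N):
            if all(below[i:i + N]):
                for j in range(i, i + N):
                    below[j] = 0         # children collapse into the parent
                level.append(1)
            else:
                level.append(0)
        levels.append(level)
    levels.reverse()                      # root level first
    return levels
```

For the toggle array {1100111100100110} of the running example, the five remaining "1" entries (versus eight in the raw toggle array) indicate five toggles suffice.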
[0050] In one example, to write binary information 810 into the
memory component 840, to-be-toggled sites in the master tree
corresponding to locations in the toggle tree that have respective
values of "1" can then be toggled. After parity calculation for the
master tree, the obtained information array should match the binary
information array corresponding to the binary information 810. An
example data processing process is illustrated by diagram 1100 in
FIG. 11.
[0051] In accordance with one aspect, the maximum achievable
payload of the memory component 840 can be found as follows.
Initially, it can be appreciated that because the number of nodes
in original information 820 is limited, the size of a master tree
that can be formed in the tree formation process is also limited.
Further, as noted above, the master tree is required to have L leaf
nodes to represent L binary information bits. Thus, to form an
N-ary complete tree with L leaf nodes, the total number of nodes
required, nNodes, can be found by the following equation:
nNodes = 1 + N + N^2 + ... + N^x, where x = log_N(L)
= Σ_{i=0}^{x} N^i = (NL - 1)/(N - 1) (because N^x = L) (5)
≈ (N/(N - 1))·L (because NL ≫ 1). (6)
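As a numeric check of Equation (5), the exact node count of a complete N-ary tree with L = N^x leaves can be sketched (the function name is an assumption):

```python
def n_nodes(N, L):
    """Total nodes in a complete N-ary tree with L = N^x leaves:
    (N*L - 1)/(N - 1), per Equation (5); exact for L a power of N."""
    return (N * L - 1) // (N - 1)
```

For N = 2 and L = 16 this gives the 31 nodes of master tree 900; for N = 4 and L = 16 it gives 21 nodes, illustrating that the node count falls as N grows.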
[0052] From Equation (6), it can be observed that as N increases,
the number of nodes required in the master tree decreases. Thus,
because the number of nodes is limited, the payload that can be
written by the memory component 840 increases as N increases. For a
fixed binary information size L and available number of nodes M,
the minimum N that can be used can be found as follows:
M ≥ (NL - 1)/(N - 1), i.e., N ≥ (M - 1)/(M - L). (7)
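Equation (7) then gives the smallest admissible branching factor; a sketch under the idealized complete-tree count of Equation (5) (the function name is an assumption, and M > L is required for the bound to be meaningful):

```python
import math

def min_branching(M, L):
    """Smallest integer N with (N*L - 1)/(N - 1) <= M, i.e., per
    Equation (7), the least N satisfying N >= (M - 1)/(M - L)."""
    return math.ceil((M - 1) / (M - L))
```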
[0053] In accordance with another aspect, the parity calculation
component 846 can operate using a low-complexity parity calculation
technique as follows. First, parity(x) can be defined as the parity
of the number of occurrences of "1" in the path from a node x to
the root node of a master tree. It can be observed that parity(x)
is equal to the value of parity(parent(x)) plus the value of node x
using modulo-2 (XOR) addition. Thus, the parity calculation component 846
can traverse the master tree starting from the root of the tree
and, for each node x being processed, update the parity of its
child node y by adding the parity of node x and the value of node
y.
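This low-complexity traversal can be sketched over the same level-order encoding assumed earlier for master trees (parent of index i at (i - 1)//N), visiting each node once instead of re-walking every leaf-to-root path:

```python
def all_parities(tree, N=2):
    """Single top-down pass: parity(x) = parity(parent(x)) XOR value(x),
    with parity(root) = value(root). tree is a complete N-ary tree in
    level order; returns the parity of every node."""
    par = [0] * len(tree)
    for i, v in enumerate(tree):
        par[i] = v if i == 0 else par[(i - 1) // N] ^ v
    return par
```

The tail of the returned array (the leaf parities) is the information array of the master tree.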
[0054] Referring now to FIGS. 12-14, methodologies that can be
implemented in accordance with various aspects described herein are
illustrated. While, for purposes of simplicity of explanation, the
methodologies are shown and described as a series of blocks, it is
to be understood and appreciated that the claimed subject matter is
not limited by the order of the blocks, as some blocks may, in
accordance with the claimed subject matter, occur in different
orders and/or concurrently with other blocks from that shown and
described herein. Moreover, not all illustrated blocks may be
required to implement the methodologies in accordance with the
claimed subject matter.
[0055] Furthermore, the claimed subject matter may be described in
the general context of computer-executable instructions, such as
program modules, executed by one or more components. Generally,
program modules include routines, programs, objects, data
structures, etc., that perform particular tasks or implement
particular abstract data types. Typically, the functionality of the
program modules may be combined or distributed as desired in
various embodiments. Furthermore, as will be appreciated, various
portions of the disclosed systems above and methods below may
include or consist of artificial intelligence or knowledge or rule
based components, sub-components, processes, means, methodologies,
or mechanisms (e.g., support vector machines, neural networks,
expert systems, Bayesian belief networks, fuzzy logic, data fusion
engines, classifiers . . . ). Such components, inter alia, can
automate certain mechanisms or processes performed thereby to make
portions of the systems and methods more adaptive as well as
efficient and intelligent.
[0056] FIG. 12 illustrates a method of low-power recording of
binary information in accordance with various aspects described
herein. At 1202, binary information to be stored by a memory (e.g.,
a memory component 110) is identified. At 1204, a hierarchical data
structure is generated (e.g., by a parity check component 130) that
comprises a plurality of nodes corresponding to respective
information bits currently stored by the memory. At 1206,
respective parity values of nodes in a bottom row of the
hierarchical data structure generated at 1204 are determined. At
1208, information bits are toggled (e.g., by a modification
component 140) corresponding to one or more nodes in the
hierarchical data structure generated at 1204 such that the
respective parity values of the nodes in the bottom row of the
hierarchical data structure as determined at 1206 match respective
corresponding bits in the binary information identified at 1202 to
be stored by the memory.
[0057] FIG. 13 illustrates a method 1300 of writing binary
information (e.g., binary information 210) to a memory (e.g., a
memory component 240) to replace original information (e.g.,
original information 220) previously stored in the memory. In
accordance with one aspect, method 1300 can be utilized to
implement a Champagne Pyramid Parity Check (CPPC) algorithm. At
1302, binary information to be written in the memory is
provided.
[0058] At 1304, at least one binary pyramid structure having nodes
representing original information is provided (e.g., by a champagne
pyramid component 244). In one example, the number of nodes at the
bottom row of the at least one pyramid structure provided at 1304
can equal the number of bits in the binary information provided at
1302. At 1306, for each node at the bottom row of the at least one
pyramid provided at 1304, parity is calculated (e.g., by a parity
calculation component 246) for the combined set of the respective
node and all other nodes in the at least one pyramid having the
respective node as a direct or indirect successor.
[0059] At 1308, the parity of respective nodes at the bottom row of
the at least one pyramid provided at 1304, as calculated at 1306,
is compared to corresponding bits of the binary information
provided at 1302 (e.g., by a comparison component 248). At 1310,
groups of successive nodes for which the comparison at 1308
indicates have respective parity values that differ from
corresponding bits of the binary information provided at 1302 are
identified (e.g., by a flavor adding optimization component 250).
At 1312, the binary information provided at 1302 is written in the
memory at least in part by toggling nodes (e.g., using a toggling
component 252) corresponding to the respective lowest common
predecessor nodes for the groups identified at 1310.
[0060] Turning to FIG. 14, another method 1400 of writing binary
information (e.g., binary information 810) into a memory (e.g., a
memory component 840) is illustrated. In accordance with one
aspect, method 1400 can be utilized to implement a Tree-Based
Parity Check (TBPC) algorithm. At 1402, binary information to be
written in memory is provided.
[0061] At 1404, one or more N-ary master trees having nodes
representing original information are provided (e.g., by a tree
component 844). In one example, the number of leaf nodes at the
bottom row in the one or more trees can be made equal to the number
of bits in the binary information provided at 1402. At 1406, for
each leaf node in the master trees provided at 1404, parity is
calculated (e.g., by a parity calculation component 846) of nodes
in a path from the leaf node to the root node of the master tree to
which the leaf node belongs. At 1408, a toggle array is obtained
(e.g., by a comparison component 848) by performing exclusive-OR
operations between respective parity values calculated at 1406 and
corresponding bits of the binary information identified at 1402. At
1410, one or more toggle trees are constructed that correspond to
the one or more master trees constructed at 1404 using the toggle
array obtained at 1408 as leaf nodes. Finally, at 1412, the binary
information provided at 1402 is written into the memory at least in
part by toggling nodes in the one or more trees created at 1404
(e.g., via a modification component 854) corresponding to the
respective highest nodes in the toggle trees constructed at 1410
for which all leaf nodes provide a toggling indication (e.g., as
determined by a fountain investigation component 850 and/or a
modification location determination component 852).
[0062] In order to provide additional context for various aspects
described herein, FIG. 15 and the following discussion are intended
to provide a brief, general description of a suitable computing
environment 1500 in which various aspects of the claimed subject
matter can be implemented. Additionally, while the above features
have been described above in the general context of
computer-executable instructions that may run on one or more
computers, those skilled in the art will recognize that said
features can also be implemented in combination with other program
modules and/or as a combination of hardware and software.
[0063] Generally, program modules include routines, programs,
components, data structures, etc., that perform particular tasks or
implement particular abstract data types. Moreover, those skilled
in the art will appreciate that the claimed subject matter can be
practiced with other computer system configurations, including
single-processor or multiprocessor computer systems, minicomputers,
mainframe computers, as well as personal computers, hand-held
computing devices, microprocessor-based or programmable consumer
electronics, and the like, each of which can be operatively coupled
to one or more associated devices.
[0064] The illustrated aspects may also be practiced in distributed
computing environments where certain tasks are performed by remote
processing devices that are linked through a communications
network. In a distributed computing environment, program modules
can be located in both local and remote memory storage devices.
[0065] A computer typically includes a variety of computer-readable
media. Computer-readable media can be any available media that can
be accessed by the computer and includes both volatile and
nonvolatile media, removable and non-removable media. By way of
example, and not limitation, computer-readable media can comprise
computer storage media and communication media. Computer storage
media can include both volatile and nonvolatile, removable and
non-removable media implemented in any method or technology for
storage of information such as computer-readable instructions, data
structures, program modules or other data. Computer storage media
includes, but is not limited to, RAM, ROM, EEPROM, flash memory or
other memory technology, CD-ROM, digital versatile disk (DVD) or
other optical disk storage, magnetic cassettes, magnetic tape,
magnetic disk storage or other magnetic storage devices, or any
other medium which can be used to store the desired information and
which can be accessed by the computer.
[0066] Communication media typically embodies computer-readable
instructions, data structures, program modules or other data in a
modulated data signal such as a carrier wave or other transport
mechanism, and includes any information delivery media. The term
"modulated data signal" means a signal that has one or more of its
characteristics set or changed in such a manner as to encode
information in the signal. By way of example, and not limitation,
communication media includes wired media such as a wired network or
direct-wired connection, and wireless media such as acoustic, RF,
infrared and other wireless media. Combinations of any of the
above should also be included within the scope of computer-readable
media.
[0067] With reference again to FIG. 15, an exemplary environment
1500 for implementing various aspects described herein includes a
computer 1502, the computer 1502 including a processing unit 1504,
a system memory 1506 and a system bus 1508. The system bus 1508
couples system components including, but not limited to, the
system memory 1506 to the processing unit 1504. The processing unit
1504 can be any of various commercially available processors. Dual
microprocessors and other multi-processor architectures may also be
employed as the processing unit 1504.
[0068] The system bus 1508 can be any of several types of bus
structure that may further interconnect to a memory bus (with or
without a memory controller), a peripheral bus, and a local bus
using any of a variety of commercially available bus architectures.
The system memory 1506 includes read-only memory (ROM) 1510 and
random access memory (RAM) 1512. A basic input/output system (BIOS)
is stored in a non-volatile memory 1510 such as ROM, EPROM, EEPROM,
which BIOS contains the basic routines that help to transfer
information between elements within the computer 1502, such as
during start-up. The RAM 1512 can also include a high-speed RAM
such as static RAM for caching data.
[0069] The computer 1502 further includes an internal hard disk
drive (HDD) 1514 (e.g., EIDE, SATA), which internal hard disk drive
1514 may also be configured for external use in a suitable chassis
(not shown), a magnetic floppy disk drive (FDD) 1516, (e.g., to
read from or write to a removable diskette 1518) and an optical
disk drive 1520, (e.g., reading a CD-ROM disk 1522 or, to read from
or write to other high capacity optical media such as the DVD). The
hard disk drive 1514, magnetic disk drive 1516 and optical disk
drive 1520 can be connected to the system bus 1508 by a hard disk
drive interface 1524, a magnetic disk drive interface 1526 and an
optical drive interface 1528, respectively. The interface 1524 for
external drive implementations includes at least one or both of
Universal Serial Bus (USB) and IEEE-1394 interface technologies.
Other external drive connection technologies are within
contemplation of the subject disclosure.
[0070] The drives and their associated computer-readable media
provide nonvolatile storage of data, data structures,
computer-executable instructions, and so forth. For the computer
1502, the drives and media accommodate the storage of any data in a
suitable digital format. Although the description of
computer-readable media above refers to a HDD, a removable magnetic
diskette, and a removable optical media such as a CD or DVD, it
should be appreciated by those skilled in the art that other types
of media which are readable by a computer, such as zip drives,
magnetic cassettes, flash memory cards, cartridges, and the like,
may also be used in the exemplary operating environment, and
further, that any such media may contain computer-executable
instructions for performing the methods described herein.
[0071] A number of program modules can be stored in the drives and
RAM 1512, including an operating system 1530, one or more
application programs 1532, other program modules 1534 and program
data 1536. All or portions of the operating system, applications,
modules, and/or data can also be cached in the RAM 1512. It is
appreciated that the claimed subject matter can be implemented with
various commercially available operating systems or combinations of
operating systems.
[0072] A user can enter commands and information into the computer
1502 through one or more wired/wireless input devices, e.g., a
keyboard 1538 and a pointing device, such as a mouse 1540. Other
input devices (not shown) may include a microphone, an IR remote
control, a joystick, a game pad, a stylus pen, touch screen, or the
like. These and other input devices are often connected to the
processing unit 1504 through an input device interface 1542 that is
coupled to the system bus 1508, but can be connected by other
interfaces, such as a parallel port, a serial port, an IEEE-1394
port, a game port, a USB port, an IR interface, etc.
[0073] A monitor 1544 or other type of display device is also
connected to the system bus 1508 via an interface, such as a video
adapter 1546. In addition to the monitor 1544, a computer typically
includes other peripheral output devices (not shown), such as
speakers, printers, etc.
[0074] The computer 1502 may operate in a networked environment
using logical connections via wired and/or wireless communications
to one or more remote computers, such as a remote computer(s) 1548.
The remote computer(s) 1548 can be a workstation, a server
computer, a router, a personal computer, portable computer,
microprocessor-based entertainment appliance, a peer device or
other common network node, and typically includes many or all of
the elements described relative to the computer 1502, although, for
purposes of brevity, only a memory/storage device 1550 is
illustrated. The logical connections depicted include
wired/wireless connectivity to a local area network (LAN) 1552
and/or larger networks, e.g., a wide area network (WAN) 1554. Such
LAN and WAN networking environments are commonplace in offices and
companies, and facilitate enterprise-wide computer networks, such
as intranets, all of which may connect to a global communications
network, e.g., the Internet.
[0075] When used in a LAN networking environment, the computer 1502
is connected to the local network 1552 through a wired and/or
wireless communication network interface or adapter 1556. The
adapter 1556 may facilitate wired or wireless communication to the
LAN 1552, which may also include a wireless access point disposed
thereon for communicating with the wireless adapter 1556.
[0076] When used in a WAN networking environment, the computer 1502
can include a modem 1558, or is connected to a communications
server on the WAN 1554, or has other means for establishing
communications over the WAN 1554, such as by way of the Internet.
The modem 1558, which can be internal or external and a wired or
wireless device, is connected to the system bus 1508 via the serial
port interface 1542. In a networked environment, program modules
depicted relative to the computer 1502, or portions thereof, can be
stored in the remote memory/storage device 1550. It will be
appreciated that the network connections shown are exemplary and
other means of establishing a communications link between the
computers can be used.
[0077] The computer 1502 is operable to communicate with any
wireless devices or entities operatively disposed in wireless
communication, e.g., a printer, scanner, desktop and/or portable
computer, portable data assistant, communications satellite, any
piece of equipment or location associated with a wirelessly
detectable tag (e.g., a kiosk, news stand, restroom), and
telephone. This includes at least Wi-Fi and Bluetooth.TM. wireless
technologies. Thus, the communication can be a predefined structure
as with a conventional network or simply an ad hoc communication
between at least two devices.
[0078] Wi-Fi, or Wireless Fidelity, is a wireless technology
similar to that used in a cell phone that enables a device to send
and receive data anywhere within the range of a base station. Wi-Fi
networks use IEEE-802.11 (a, b, g, etc.) radio technologies to
provide secure, reliable, and fast wireless connectivity. A Wi-Fi
network can be used to connect computers to each other, to the
Internet, and to wired networks (which use IEEE-802.3 or Ethernet).
Wi-Fi networks operate in the unlicensed 2.4 and 5 GHz radio bands,
at an 11 Mbps (802.11b) or 54 Mbps (802.11a) data rate, for
example, or with products that contain both bands (dual band).
Thus, networks using Wi-Fi wireless technology can provide
real-world performance similar to a 10BaseT wired Ethernet
network.
[0079] Referring now to FIG. 16, there is illustrated a schematic
block diagram of an exemplary computer compilation system operable
to execute the disclosed architecture. The system 1600 includes one
or more client(s) 1602. The client(s) 1602 can be hardware and/or
software (e.g., threads, processes, computing devices). In one
example, the client(s) 1602 can house cookie(s) and/or associated
contextual information by employing one or more features described
herein.
[0080] The system 1600 also includes one or more server(s) 1604.
The server(s) 1604 can also be hardware and/or software (e.g.,
threads, processes, computing devices). In one example, the servers
1604 can house threads to perform transformations by employing one
or more features described herein. One possible communication
between a client 1602 and a server 1604 can be in the form of a
data packet adapted to be transmitted between two or more computer
processes. The data packet may include a cookie and/or associated
contextual information, for example. The system 1600 includes a
communication framework 1606 (e.g., a global communication network
such as the Internet) that can be employed to facilitate
communications between the client(s) 1602 and the server(s)
1604.
[0081] Communications can be facilitated via a wired (including
optical fiber) and/or wireless technology. The client(s) 1602 are
operatively connected to one or more client data store(s) 1608 that
can be employed to store information local to the client(s) 1602
(e.g., cookie(s) and/or associated contextual information).
Similarly, the server(s) 1604 are operatively connected to one or
more server data store(s) 1610 that can be employed to store
information local to the servers 1604.
[0082] The claimed subject matter has been described herein by way
of examples. For the avoidance of doubt, the subject matter
disclosed herein is not limited by such examples. In addition, any
aspect or design described herein as "exemplary" is not necessarily
to be construed as preferred or advantageous over other aspects or
designs, nor is it meant to preclude equivalent exemplary
structures and techniques known to those of ordinary skill in the
art. Furthermore, to the extent that the terms "includes," "has,"
"contains," and other similar words are used in either the detailed
description or the claims, for the avoidance of doubt, such terms
are intended to be inclusive in a manner similar to the term
"comprising" as an open transition word without precluding any
additional or other elements.
[0083] Additionally, the disclosed subject matter can be
implemented as a system, method, apparatus, or article of
manufacture using standard programming and/or engineering
techniques to produce software, firmware, hardware, or any
combination thereof to control a computer or processor based device
to implement aspects detailed herein. The terms "article of
manufacture," "computer program product" or similar terms, where
used herein, are intended to encompass a computer program
accessible from any computer-readable device, carrier, or media.
For example, computer readable media can include but are not
limited to magnetic storage devices (e.g., hard disk, floppy disk,
magnetic strips . . . ), optical disks (e.g., compact disk (CD),
digital versatile disk (DVD) . . . ), smart cards, and flash memory
devices (e.g., card, stick). Additionally, it is known that a
carrier wave can be employed to carry computer-readable electronic
data such as those used in transmitting and receiving electronic
mail or in accessing a network such as the Internet or a local area
network (LAN).
[0084] The aforementioned systems have been described with respect
to interaction between several components. It can be appreciated
that such systems and components can include those components or
specified sub-components, some of the specified components or
sub-components, and/or additional components, according to various
permutations and combinations of the foregoing. Sub-components can
also be implemented as components communicatively coupled to other
components rather than included within parent components, e.g.,
according to a hierarchical arrangement. Additionally, it should be
noted that one or more components can be combined into a single
component providing aggregate functionality or divided into several
separate sub-components, and any one or more middle layers, such as
a management layer, can be provided to communicatively couple to
such sub-components in order to provide integrated functionality.
Any components described herein can also interact with one or more
other components not specifically described herein but generally
known by those of skill in the art.
* * * * *