U.S. patent application number 15/809,200, for "Processing Images Using Deep Neural Networks," was filed with the patent office on November 10, 2017, and published on March 8, 2018, under publication number 20180068207. The applicant listed for this patent is Google LLC. The invention is credited to Christian Szegedy and Vincent O. Vanhoucke.
United States Patent Application 20180068207
Kind Code: A1
Szegedy, Christian; et al.
Publication Date: March 8, 2018
PROCESSING IMAGES USING DEEP NEURAL NETWORKS
Abstract
Methods, systems, and apparatus, including computer programs
encoded on computer storage media, for image processing using deep
neural networks. One of the methods includes receiving data
characterizing an input image; processing the data characterizing
the input image using a deep neural network to generate an
alternative representation of the input image, wherein the deep
neural network comprises a plurality of subnetworks, wherein the
subnetworks are arranged in a sequence from lowest to highest, and
wherein processing the data characterizing the input image using
the deep neural network comprises processing the data through each
of the subnetworks in the sequence; and processing the alternative
representation of the input image through an output layer to
generate an output from the input image.
Inventors: Szegedy, Christian (Mountain View, CA); Vanhoucke, Vincent O. (San Francisco, CA)
Applicant: Google LLC, Mountain View, CA, US
Family ID: 54073023
Appl. No.: 15/809,200
Filed: November 10, 2017
Related U.S. Patent Documents
Application Number | Filing Date  | Patent Number
15/649,947         | Jul 14, 2017 |
14/839,452         | Aug 28, 2015 | 9,715,642
62/043,865         | Aug 29, 2014 |
Current U.S. Class: 1/1
Current CPC Class: G06N 3/0454 (20130101); G06K 9/66 (20130101); G06N 3/063 (20130101); G06N 3/084 (20130101)
International Class: G06K 9/66 (20060101); G06N 3/04 (20060101); G06N 3/08 (20060101)
Claims
1. (canceled)
2. A method comprising: receiving data characterizing an input
image; processing the data characterizing the input image using a
deep neural network to generate an alternative representation of
the input image, wherein the deep neural network comprises a
plurality of subnetworks, wherein the subnetworks are arranged in a
sequence from lowest to highest, and wherein processing the data
characterizing the input image using the deep neural network
comprises processing the data through each of the subnetworks in
the sequence, wherein the plurality of subnetworks comprise a
plurality of module subnetworks, and wherein each of the module
subnetworks is configured to: receive a preceding output
representation generated by a preceding subnetwork in the sequence;
process the preceding output representation through each layer of a
first group of neural network layers to generate a first group
output, the first group comprising a 1×1 convolutional layer followed by a 3×3 convolutional layer; and generate an output
representation for the module subnetwork from the first group
output; and processing the alternative representation of the input
image through an output layer to generate an output from the input
image.
3. The method of claim 2, wherein each of the module subnetworks is
further configured to: process the preceding output representation
through a pass-through convolutional layer to generate a
pass-through output; and concatenate the pass-through output and
the first group output.
4. The method of claim 3, wherein the output representation
includes the concatenated pass-through output and first group
output.
5. The method of claim 3, wherein the pass-through convolutional
layer is a 1×1 convolutional layer.
6. The method of claim 2, wherein each of the module subnetworks is
further configured to: process the preceding output representation
through each layer of a second group of neural network layers to
generate a second group output, wherein the second group comprises
a third convolutional layer followed by a fourth convolutional
layer.
7. The method of claim 6, wherein the third convolutional layer is
a 1×1 convolutional layer.
8. The method of claim 6, wherein the fourth convolutional layer is
a 5×5 convolutional layer.
9. The method of claim 6, wherein each of the module subnetworks is
further configured to: process the preceding output representation
through each layer of a third group of neural network layers to
generate a third group output, wherein the third group comprises a
first max-pooling layer followed by a fifth convolutional
layer.
10. The method of claim 9, wherein the first max-pooling layer is a
3×3 max-pooling layer.
11. The method of claim 9, wherein the fifth convolutional layer is
a 1×1 convolutional layer.
12. The method of claim 2, wherein the plurality of subnetworks
comprises one or more additional max-pooling layers.
13. The method of claim 2, wherein the plurality of subnetworks
comprises one or more initial convolutional layers.
14. A system comprising one or more computers and one or more
storage devices storing instructions that when executed by the one
or more computers cause the one or more computers to perform
operations comprising: receiving data characterizing an input
image; processing the data characterizing the input image using a
deep neural network to generate an alternative representation of
the input image, wherein the deep neural network comprises a
plurality of subnetworks, wherein the subnetworks are arranged in a
sequence from lowest to highest, and wherein processing the data
characterizing the input image using the deep neural network
comprises processing the data through each of the subnetworks in
the sequence, wherein the plurality of subnetworks comprise a
plurality of module subnetworks, and wherein each of the module
subnetworks is configured to: receive a preceding output
representation generated by a preceding subnetwork in the sequence;
process the preceding output representation through each layer of a
first group of neural network layers to generate a first group
output, the first group comprising a 1×1 convolutional layer followed by a 3×3 convolutional layer; and generate an output
representation for the module subnetwork from the first group
output; and processing the alternative representation of the input
image through an output layer to generate an output from the input
image.
15. The system of claim 14, wherein each of the module subnetworks
is further configured to: process the preceding output
representation through a pass-through convolutional layer to
generate a pass-through output; and concatenate the pass-through
output and the first group output.
16. The system of claim 15, wherein the output representation
includes the concatenated pass-through output and first group
output.
17. The system of claim 15, wherein the pass-through convolutional
layer is a 1×1 convolutional layer.
18. The system of claim 14, wherein each of the module subnetworks
is further configured to: process the preceding output
representation through each layer of a second group of neural
network layers to generate a second group output, wherein the
second group comprises a third convolutional layer followed by a
fourth convolutional layer.
19. The system of claim 18, wherein each of the module subnetworks
is further configured to: process the preceding output
representation through each layer of a third group of neural
network layers to generate a third group output, wherein the third
group comprises a first max-pooling layer followed by a fifth
convolutional layer.
20. A computer program product encoded on one or more
non-transitory computer storage media, the computer program product
comprising instructions that when executed by one or more computers
cause the one or more computers to perform operations comprising:
receiving data characterizing an input image; processing the data
characterizing the input image using a deep neural network to
generate an alternative representation of the input image, wherein
the deep neural network comprises a plurality of subnetworks,
wherein the subnetworks are arranged in a sequence from lowest to
highest, and wherein processing the data characterizing the input
image using the deep neural network comprises processing the data
through each of the subnetworks in the sequence, wherein the
plurality of subnetworks comprise a plurality of module
subnetworks, and wherein each of the module subnetworks is
configured to: receive a preceding output representation generated
by a preceding subnetwork in the sequence; process the preceding
output representation through each layer of a first group of neural
network layers to generate a first group output, the first group
comprising a 1×1 convolutional layer followed by a 3×3
convolutional layer; and generate an output representation for the
module subnetwork from the first group output; and processing the
alternative representation of the input image through an output
layer to generate an output from the input image.
Description
CROSS-REFERENCE TO RELATED APPLICATION
[0001] This application is a continuation of U.S. application Ser.
No. 15/649,947, filed on Jul. 14, 2017, which is a continuation of
U.S. application Ser. No. 14/839,452, filed on Aug. 28, 2015, now
U.S. Pat. No. 9,715,642, which claims priority to U.S. Provisional
Application No. 62/043,865, filed on Aug. 29, 2014. The disclosures of the prior applications are considered part of, and are incorporated by reference in, the disclosure of this application.
BACKGROUND
[0002] This specification relates to processing images using deep
neural networks, e.g., convolutional neural networks.
[0003] Convolutional neural networks generally include two kinds of
neural network layers, convolutional neural network layers and
fully-connected neural network layers. Convolutional neural network
layers have sparse connectivity, with each node in a convolutional
layer receiving input from only a subset of the nodes in the next
lowest neural network layer. Some convolutional neural network
layers have nodes that share weights with other nodes in the layer.
Nodes in fully-connected layers, however, receive input from each
node in the next lowest neural network layer.
SUMMARY
[0004] In general, this specification describes techniques for
processing images using deep neural networks.
[0005] Particular embodiments of the subject matter described in
this specification can be implemented so as to realize one or more
of the following advantages. By including subnetworks and, in
particular, module subnetworks, in a deep neural network, the deep
neural network can perform better on image processing tasks, e.g.,
object recognition or image classification. Additionally, deep
neural networks that include module subnetworks can be trained
quicker and more efficiently than deep neural networks that do not
include module subnetworks while maintaining improved performance
on the image processing tasks.
[0006] The details of one or more embodiments of the subject matter
of this specification are set forth in the accompanying drawings
and the description below. Other features, aspects, and advantages
of the subject matter will become apparent from the description,
the drawings, and the claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] FIG. 1 shows an example image processing system.
[0008] FIG. 2 is a flow diagram of an example process for
generating an output from an input image.
[0009] FIG. 3 is a flow diagram of an example process for
processing an input using a module subnetwork.
[0010] Like reference numbers and designations in the various
drawings indicate like elements.
DETAILED DESCRIPTION
[0011] FIG. 1 shows an example image processing system 100. The
image processing system 100 is an example of a system implemented
as computer programs on one or more computers in one or more
locations, in which the systems, components, and techniques
described below can be implemented.
[0012] The image processing system 100 receives data characterizing
an input image, e.g., pixel information for the input image or
other information characterizing the input image. For example, the
image processing system 100 can receive input image data 102. The
image processing system 100 processes the received data using a
deep neural network 150 and an output layer 152 to generate an
output for the input image, e.g., an output 154 from the input
image data 102.
[0013] The image processing system 100 can be configured to receive
input image data and to generate any kind of score or
classification output based on the input image, i.e., can be
configured to perform any kind of image processing task. The score
or classification output generated by the system depends on the
task that the image processing system has been configured to
perform. For example, for an image classification or recognition
task, the output generated by the image processing system 100 for a
given image may be scores for each of a set of object categories,
with each score representing the likelihood that the image contains
an image of an object belonging to the category. As another
example, for an object detection task, the output generated by the
image processing system 100 can identify a location, a size, or
both, of an object of interest in the input image.
[0014] The deep neural network 150 includes a sequence of multiple
subnetworks arranged from a lowest subnetwork in the sequence to a
highest subnetwork in the sequence, e.g., the sequence that
includes subnetwork A 104, subnetwork B 106, and subnetwork C 108.
The deep neural network 150 processes received input image data
through each of the subnetworks in the sequence to generate an
alternative representation of the input image. Once the deep neural
network 150 has generated the alternative representation of the
input image, the output layer 152 processes the alternative
representation to generate an output for the input image. As
described above, the type of output generated by the output layer
152 depends on the image processing task the image processing system 100 has been configured to perform. Similarly, the type of
output layer 152 used to generate the output from the alternative
representation also depends on the task. In particular, the output
layer 152 is an output layer that is appropriate for the task,
i.e., that generates the kind of output that is necessary for the
image processing task. For example, for the image classification
task, the output layer may be a softmax output layer that generates
the respective score for each of the set of object categories.
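By way of illustration only, the arrangement described in paragraph [0014] can be sketched in PyTorch (the patent does not specify a framework; the pooling step and the single linear layer feeding the softmax are assumptions made for this sketch):

```python
import torch
import torch.nn as nn

class DeepImageNetwork(nn.Module):
    """A deep neural network: a sequence of subnetworks plus an output layer."""

    def __init__(self, subnetworks, feature_channels, num_categories):
        super().__init__()
        self.subnetworks = nn.ModuleList(subnetworks)   # ordered lowest to highest
        self.pool = nn.AdaptiveAvgPool2d(1)             # collapse spatial dimensions
        self.output_layer = nn.Linear(feature_channels, num_categories)

    def forward(self, image_data):
        representation = image_data
        # Process the data through each of the subnetworks in the sequence
        # to generate the alternative representation of the input image.
        for subnetwork in self.subnetworks:
            representation = subnetwork(representation)
        representation = self.pool(representation).flatten(1)
        # Softmax output layer: a respective score for each object category.
        return torch.softmax(self.output_layer(representation), dim=1)
```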
[0015] The subnetworks in the sequence include multiple module
subnetworks and, optionally, one or more other subnetworks that
each consist of one or more conventional neural network layers,
e.g., max-pooling layers, convolutional layers, fully-connected
layers, regularization layers, and so on.
[0016] In the example of FIG. 1, subnetwork B 106 is depicted as a
module subnetwork. While only a single module subnetwork is shown
in the example of FIG. 1, the deep neural network 150 will
generally include multiple module subnetworks. A module subnetwork
generally includes a pass-through convolutional layer, e.g., the
pass-through convolutional layer 106, one or more groups of neural
network layers, and a concatenation layer, e.g., concatenation
layer 130. The module subnetwork B 106 receives an input from a
preceding subnetwork in the sequence and generates an output
representation from the received input.
[0017] The concatenation layer 130 receives an output generated by
the pass-through convolutional layer 108 and a respective output
generated by each of the groups of neural network layers and
concatenates the received outputs to generate a single output that
is provided as the output of the subnetwork B 106 to the next
module in the sequence of modules or to the output layer 152.
[0018] Each group of neural network layers in a module subnetwork
includes two or more neural network layers, with an initial neural
network layer followed by one or more other neural network layers.
For example, the subnetwork B 106 includes one group that includes
a first convolutional layer 110 followed by a second convolutional
layer 112, another group that includes a convolutional layer 114
followed by a convolutional layer 116, and a third group that
includes a max pooling layer 118 followed by a convolutional layer
120.
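A minimal sketch of such a module subnetwork, assuming a PyTorch-style implementation: the pass-through layer and the filter sizes follow the examples given in the claims and in paragraphs [0028]-[0031], while the channel counts, the ReLU activations, and the padding used to keep the branch outputs the same spatial size are assumptions for the sketch.

```python
import torch
import torch.nn as nn

class ModuleSubnetwork(nn.Module):
    """Pass-through convolution, three groups of layers, and a concatenation."""

    def __init__(self, in_ch, pass_ch, g1_reduce, g1_out, g2_reduce, g2_out, g3_out):
        super().__init__()
        # Pass-through 1x1 convolutional layer.
        self.pass_through = nn.Conv2d(in_ch, pass_ch, kernel_size=1)
        # First group: 1x1 convolutional layer followed by a 3x3 convolutional layer.
        self.group1 = nn.Sequential(
            nn.Conv2d(in_ch, g1_reduce, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(g1_reduce, g1_out, kernel_size=3, padding=1), nn.ReLU(inplace=True))
        # Second group: 1x1 convolutional layer followed by a 5x5 convolutional layer.
        self.group2 = nn.Sequential(
            nn.Conv2d(in_ch, g2_reduce, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(g2_reduce, g2_out, kernel_size=5, padding=2), nn.ReLU(inplace=True))
        # Third group: 3x3 max-pooling layer followed by a 1x1 convolutional layer.
        self.group3 = nn.Sequential(
            nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
            nn.Conv2d(in_ch, g3_out, kernel_size=1), nn.ReLU(inplace=True))

    def forward(self, preceding_output):
        # The concatenation layer joins the pass-through output and the group
        # outputs along the channel dimension to form the output representation.
        return torch.cat([self.pass_through(preceding_output),
                          self.group1(preceding_output),
                          self.group2(preceding_output),
                          self.group3(preceding_output)], dim=1)
```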
[0019] Generally, each node in a fully-connected layer receives an
input from each node in the next lowest layer in the sequence and
produces an activation from the received inputs in accordance with
a set of weights for the node. The activations generated by each
node in a given fully-connected layer are provided as an input to
each node in the next highest fully-connected layer in the sequence
or, if the fully-connected layer is the highest layer in the
sequence, provided to the output layer 152.
[0020] Unlike fully-connected layers, convolutional layers are
generally sparsely-connected neural network layers. That is, each
node in a convolutional layer receives an input from a portion of,
i.e., less than all of, the nodes in the preceding neural network
layer or, if the convolutional layer is the lowest layer in the
sequence, a portion of an input to the image processing system 100,
and produces an activation from the input. Generally, convolutional
layers have nodes that produce an activation by convolving received
inputs in accordance with a set of weights for each node. In some
cases, nodes in a convolutional layer may be configured to share
weights. That is, a portion of the nodes in the layer may be
constrained to always have the same weight values as the other
nodes in the layer.
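The connectivity difference can be made concrete by comparing parameter counts for a fully-connected layer and a weight-sharing convolutional layer with the same number of output channels; the sizes below are arbitrary and only illustrative:

```python
import torch.nn as nn

# Fully-connected layer: every node receives input from every node in the layer below.
fc = nn.Linear(in_features=32 * 32 * 64, out_features=128)

# Convolutional layer: each node sees only a 3x3 patch of the layer below,
# and the same 3x3 filter weights are shared across all spatial positions.
conv = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3)

count = lambda m: sum(p.numel() for p in m.parameters())
print(count(fc))    # -> 8388736 weights and biases
print(count(conv))  # -> 73856 weights and biases
```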
[0021] Processing an input using a module subnetwork to generate an
output representation is described in more detail below with
reference to FIG. 3.
[0022] FIG. 2 is a flow diagram of an example process 200 for
generating an output from a received input. For convenience, the
process 200 will be described as being performed by a system of one
or more computers located in one or more locations. For example, an
image processing system, e.g., the image processing system 100 of
FIG. 1, appropriately programmed in accordance with this
specification, can perform the process 200.
[0023] The system receives data characterizing an input image (step
202).
[0024] The system processes the data using a deep neural network
that includes subnetworks, e.g., the deep neural network 150 of
FIG. 1, to generate an alternative representation (step 204). The
deep neural network includes a sequence of subnetworks arranged
from a lowest subnetwork in the sequence to a highest subnetwork in
the sequence. The system processes the data through each of the
subnetworks in the sequence to generate the alternative
representation. The subnetworks in the sequence include multiple
module subnetworks and, optionally, one or more subnetworks that
include one or more conventional neural network layers, e.g.,
max-pooling layers, convolutional layers, fully-connected layers,
regularization layers, and so on. Processing an input through a
module subnetwork is described below with reference to FIG. 3.
[0025] The system processes the alternative representation through
an output layer to generate an output for the input image (step
206). Generally, the output generated by the system depends on the
image processing task that the system has been configured to
perform. For example, if the system is configured to perform an
image classification or recognition task, the output generated by
the output layer may be a respective score for each of a
predetermined set of object categories, with the score for a given
object category representing the likelihood that the input image
contains an image of an object that belongs to the object
category.
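Reusing the hypothetical DeepImageNetwork and ModuleSubnetwork classes sketched above, process 200 might look like the following for an image classification task; the input size, layer sizes, and number of categories are assumptions for illustration:

```python
import torch
import torch.nn as nn

# Step 202: receive data characterizing an input image (a random tensor standing
# in for pixel data, shaped [batch, channels, height, width]).
input_image = torch.randn(1, 3, 224, 224)

# Steps 204-206: process the data through each subnetwork in the sequence and
# then through the output layer to obtain a score for each object category.
subnetworks = [
    nn.Sequential(nn.Conv2d(3, 64, kernel_size=7, stride=2, padding=3), nn.ReLU()),
    ModuleSubnetwork(64, 32, 48, 64, 8, 16, 16),   # 32 + 64 + 16 + 16 = 128 output channels
]
model = DeepImageNetwork(subnetworks, feature_channels=128, num_categories=1000)
scores = model(input_image)
print(scores.shape)   # torch.Size([1, 1000]); scores for each image sum to 1
```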
[0026] FIG. 3 is a flow diagram of an example process 300 for
processing an input using a module subnetwork. For convenience, the
process 300 will be described as being performed by a system of one
or more computers located in one or more locations. For example, an
image processing system, e.g., the image processing system 100 of
FIG. 1, appropriately programmed in accordance with this
specification, can perform the process 300.
[0027] The system receives an input (step 302). In particular, the
input is a preceding output representation, i.e., an output
representation generated by a preceding subnetwork in the sequence
of subnetworks.
[0028] The system processes the preceding output representation
through a pass-through convolutional layer to generate a
pass-through output (step 304). In some implementations, the
pass-through convolutional layer is a 1×1 convolutional layer. Generally, a k×k convolutional layer is a convolutional layer that uses a k×k filter. That is, k×k represents the size of the patch in the preceding layer that the convolutional layer is connected to. In these implementations, the 1×1 pass-through convolutional layer is
generally used as a dimension reduction module to reduce the
dimension of the preceding output representation and remove
computational bottlenecks that may otherwise limit the size of the
deep neural network. In other implementations, the pass-through
convolutional layers can use different sized filters, e.g., a
3×3 convolutional layer or a 5×5 convolutional
layer.
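To illustrate the dimension-reduction point, the following (purely illustrative) comparison shows how inserting a 1×1 convolution before a 3×3 convolution shrinks the parameter count for a 256-channel preceding representation:

```python
import torch.nn as nn

params = lambda m: sum(p.numel() for p in m.parameters())

# 3x3 convolution applied directly to a 256-channel representation.
direct = nn.Conv2d(256, 192, kernel_size=3, padding=1)

# 1x1 reduction to 64 channels first, then the same 3x3 convolution.
reduced = nn.Sequential(nn.Conv2d(256, 64, kernel_size=1),
                        nn.Conv2d(64, 192, kernel_size=3, padding=1))

print(params(direct))   # -> 442560 parameters
print(params(reduced))  # -> 127232 parameters, roughly 3.5x fewer for the same receptive field
```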
[0029] The system processes the preceding output representation
through one or more groups of neural network layers (step 306).
Each group of neural network layers includes an initial neural
network layer followed by one or more additional neural network
layers. The system processes the preceding output representation
through a given group by processing the preceding output
representation through each of the neural network layers in the
group to generate a group output for the group.
[0030] In some implementations, one or more of the groups includes
one convolutional layer followed by another convolutional layer.
For example, one group may include a 1×1 convolutional layer followed by a 3×3 convolutional layer. As another example, another group may include a 1×1 convolutional layer followed by a 5×5 convolutional layer. As described above, the 1×1 convolutional layers can be used as a dimension reduction
module to reduce the dimension of the preceding output
representation before it is processed by the other convolutional
layer that follows the 1.times.1 convolutional layer. Other
combinations of convolutional layer sizes are possible,
however.
[0031] In some implementations, one or more of the groups includes
a max-pooling layer followed by a convolutional layer. For example,
the max-pooling layer may be a 3×3 max-pooling layer followed by a 1×1 convolutional layer. Other combinations of
max-pooling layer sizes and convolutional layer sizes are possible,
however.
[0032] The system concatenates the pass-through output with the
group outputs to generate an output representation (step 308). For
example, the system can concatenate vectors generated by the
pass-through convolutional layer and the groups to generate a
single vector, i.e., the output representation. The system can then
provide the output representation as an input to the next
subnetwork in the sequence or to the output layer of the
system.
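Step 308 joins feature maps along the channel (depth) dimension, which requires the pass-through output and the group outputs to share spatial dimensions; a shape-only illustration with arbitrary channel counts:

```python
import torch

# Outputs of the pass-through layer and the three groups for one 28x28 input,
# differing only in channel count.
pass_through = torch.randn(1, 64, 28, 28)
group1 = torch.randn(1, 128, 28, 28)
group2 = torch.randn(1, 32, 28, 28)
group3 = torch.randn(1, 32, 28, 28)

output_representation = torch.cat([pass_through, group1, group2, group3], dim=1)
print(output_representation.shape)   # torch.Size([1, 256, 28, 28]): 64 + 128 + 32 + 32 channels
```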
[0033] The processes 200 and 300 can be performed to generate
classification data for images for which the desired
classification, i.e., the output that should be generated by the
system for the image, is not known. The processes 200 and 300 can
also be performed on images in a set of training images, i.e., a
set of images for which the output that should be predicted by the
system is known, in order to train the deep neural network, i.e.,
to determine trained values for the parameters of the layers in the
deep neural network, i.e., of the layers in the module subnetworks
and the other subnetworks. In particular, the processes 200 and 300
can be performed repeatedly on images selected from a set of
training images as part of a backpropagation training technique
that determines trained values for the parameters of the layers of
the deep neural network.
[0034] In some implementations, during training, the deep neural
network is augmented with one or more other training subnetworks
that are removed after the deep neural network has been trained.
Each other training subnetwork (also referred to as a "side tower")
includes one or more conventional neural network layers, e.g., can
include one or more of average pooling layers, fully connected
layers, dropout layers, and so on, and an output layer that is
configured to generate the same classifications as the output layer
of the system. Each other training subnetwork is configured to
receive the output generated by one of the subnetworks of the deep
neural network, i.e., in parallel with the subnetwork that already
receives the subnetwork output, and process the subnetwork output
to generate a training subnetwork output for the training image.
The training subnetwork output is also used to adjust values for
the parameters of the layers in the deep neural network as part of
the backpropagation training technique. As described above, once
the deep neural network has been trained, the training subnetworks
are removed.
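A rough sketch of the side-tower idea, assuming PyTorch and a cross-entropy objective; the specific layer sizes, dropout rate, and side-tower loss weight are assumptions, since the patent only names the kinds of layers a side tower can contain:

```python
import torch
import torch.nn as nn

class SideTower(nn.Module):
    """Auxiliary training head attached to an intermediate subnetwork output."""

    def __init__(self, in_channels, num_categories):
        super().__init__()
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(4),                 # average pooling layer
            nn.Flatten(),
            nn.Linear(in_channels * 4 * 4, 1024),    # fully connected layer
            nn.ReLU(inplace=True),
            nn.Dropout(0.7),                         # dropout layer
            nn.Linear(1024, num_categories))         # same classifications as the main output layer

    def forward(self, subnetwork_output):
        return self.head(subnetwork_output)

# During training, the side-tower loss is added to the main loss so that its
# gradients also help adjust the parameters of the lower layers; the tower is
# discarded once the deep neural network has been trained.
def training_loss(main_logits, side_logits, labels, side_weight=0.3):
    criterion = nn.CrossEntropyLoss()
    return criterion(main_logits, labels) + side_weight * criterion(side_logits, labels)
```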
[0035] Embodiments of the subject matter and the functional
operations described in this specification can be implemented in
digital electronic circuitry, in tangibly-embodied computer
software or firmware, in computer hardware, including the
structures disclosed in this specification and their structural
equivalents, or in combinations of one or more of them. Embodiments
of the subject matter described in this specification can be
implemented as one or more computer programs, i.e., one or more
modules of computer program instructions encoded on a tangible non
transitory program carrier for execution by, or to control the
operation of, data processing apparatus. Alternatively or in
addition, the program instructions can be encoded on an
artificially generated propagated signal, e.g., a machine-generated
electrical, optical, or electromagnetic signal, that is generated
to encode information for transmission to suitable receiver
apparatus for execution by a data processing apparatus. The
computer storage medium can be a machine-readable storage device, a
machine-readable storage substrate, a random or serial access
memory device, or a combination of one or more of them.
[0036] The term "data processing apparatus" encompasses all kinds
of apparatus, devices, and machines for processing data, including
by way of example a programmable processor, a computer, or multiple
processors or computers. The apparatus can include special purpose
logic circuitry, e.g., an FPGA (field programmable gate array) or
an ASIC (application specific integrated circuit). The apparatus
can also include, in addition to hardware, code that creates an
execution environment for the computer program in question, e.g.,
code that constitutes processor firmware, a protocol stack, a
database management system, an operating system, or a combination
of one or more of them.
[0037] A computer program (which may also be referred to or
described as a program, software, a software application, a module,
a software module, a script, or code) can be written in any form of
programming language, including compiled or interpreted languages,
or declarative or procedural languages, and it can be deployed in
any form, including as a standalone program or as a module,
component, subroutine, or other unit suitable for use in a
computing environment. A computer program may, but need not,
correspond to a file in a file system. A program can be stored in a
portion of a file that holds other programs or data, e.g., one or
more scripts stored in a markup language document, in a single file
dedicated to the program in question, or in multiple coordinated
files, e.g., files that store one or more modules, sub programs, or
portions of code. A computer program can be deployed to be executed
on one computer or on multiple computers that are located at one
site or distributed across multiple sites and interconnected by a
communication network.
[0038] The processes and logic flows described in this
specification can be performed by one or more programmable
computers executing one or more computer programs to perform
functions by operating on input data and generating output. The
processes and logic flows can also be performed by, and apparatus
can also be implemented as, special purpose logic circuitry, e.g.,
an FPGA (field programmable gate array) or an ASIC (application
specific integrated circuit).
[0039] Computers suitable for the execution of a computer program
include, by way of example, general or special
purpose microprocessors or both, or any other kind of central
processing unit. Generally, a central processing unit will receive
instructions and data from a read only memory or a random access
memory or both. The essential elements of a computer are a central
processing unit for performing or executing instructions and one or
more memory devices for storing instructions and data. Generally, a
computer will also include, or be operatively coupled to receive
data from or transfer data to, or both, one or more mass storage
devices for storing data, e.g., magnetic, magneto optical disks, or
optical disks. However, a computer need not have such devices.
Moreover, a computer can be embedded in another device, e.g., a
mobile telephone, a personal digital assistant (PDA), a mobile
audio or video player, a game console, a Global Positioning System
(GPS) receiver, or a portable storage device, e.g., a universal
serial bus (USB) flash drive, to name just a few.
[0040] Computer readable media suitable for storing computer
program instructions and data include all forms of nonvolatile
memory, media and memory devices, including by way of example
semiconductor memory devices, e.g., EPROM, EEPROM, and flash memory
devices; magnetic disks, e.g., internal hard disks or removable
disks; magneto optical disks; and CD ROM and DVD-ROM disks. The
processor and the memory can be supplemented by, or incorporated
in, special purpose logic circuitry.
[0041] To provide for interaction with a user, embodiments of the
subject matter described in this specification can be implemented
on a computer having a display device, e.g., a CRT (cathode ray
tube) or LCD (liquid crystal display) monitor, for displaying
information to the user and a keyboard and a pointing device, e.g.,
a mouse or a trackball, by which the user can provide input to the
computer. Other kinds of devices can be used to provide for
interaction with a user as well; for example, feedback provided to
the user can be any form of sensory feedback, e.g., visual
feedback, auditory feedback, or tactile feedback; and input from
the user can be received in any form, including acoustic, speech,
or tactile input. In addition, a computer can interact with a user
by sending documents to and receiving documents from a device that
is used by the user; for example, by sending web pages to a web
browser on a user's client device in response to requests received
from the web browser.
[0042] Embodiments of the subject matter described in this
specification can be implemented in a computing system that
includes a back end component, e.g., as a data server, or that
includes a middleware component, e.g., an application server, or
that includes a front end component, e.g., a client computer having
a graphical user interface or a Web browser through which a user
can interact with an implementation of the subject matter described
in this specification, or any combination of one or more such back
end, middleware, or front end components. The components of the
system can be interconnected by any form or medium of digital data
communication, e.g., a communication network. Examples of
communication networks include a local area network ("LAN") and a
wide area network ("WAN"), e.g., the Internet.
[0043] The computing system can include clients and servers. A
client and server are generally remote from each other and
typically interact through a communication network. The
relationship of client and server arises by virtue of computer
programs running on the respective computers and having a
client-server relationship to each other.
[0044] While this specification contains many specific
implementation details, these should not be construed as
limitations on the scope of any invention or of what may be
claimed, but rather as descriptions of features that may be
specific to particular embodiments of particular inventions.
Certain features that are described in this specification in the
context of separate embodiments can also be implemented in
combination in a single embodiment. Conversely, various features
that are described in the context of a single embodiment can also
be implemented in multiple embodiments separately or in any
suitable subcombination. Moreover, although features may be
described above as acting in certain combinations and even
initially claimed as such, one or more features from a claimed
combination can in some cases be excised from the combination, and
the claimed combination may be directed to a subcombination or
variation of a subcombination.
[0045] Similarly, while operations are depicted in the drawings in
a particular order, this should not be understood as requiring that
such operations be performed in the particular order shown or in
sequential order, or that all illustrated operations be performed,
to achieve desirable results. In certain circumstances,
multitasking and parallel processing may be advantageous. Moreover,
the separation of various system modules and components in the
embodiments described above should not be understood as requiring
such separation in all embodiments, and it should be understood
that the described program components and systems can generally be
integrated together in a single software product or packaged into
multiple software products.
[0046] Particular embodiments of the subject matter have been
described. Other embodiments are within the scope of the following
claims. For example, the actions recited in the claims can be
performed in a different order and still achieve desirable results.
As one example, the processes depicted in the accompanying figures
do not necessarily require the particular order shown, or
sequential order, to achieve desirable results. In certain
implementations, multitasking and parallel processing may be
advantageous.
* * * * *