U.S. patent application number 16/721954 was published by the patent office on 2020-04-23 as publication number 20200125837 for a system and method for generating a facial representation.
This patent application is currently assigned to Cortica Ltd. The applicant listed for this patent is Cortica Ltd. Invention is credited to Karina Odinaev, Igal Raichelgauz, and Yehoshua Y. Zeevi.
Application Number | 16/721954 |
Publication Number | 20200125837 |
Family ID | 54065620 |
Publication Date | 2020-04-23 |
United States Patent Application | 20200125837 |
Kind Code | A1 |
Raichelgauz; Igal; et al. | April 23, 2020 |
SYSTEM AND METHOD FOR GENERATING A FACIAL REPRESENTATION
Abstract
A system and method for generating a facial representation. The
method includes identifying, via at least one data source, at least
one multimedia content element; generating at least one signature
for at least a portion of each identified multimedia content
element, wherein each generated signature represents at least one
facial concept; analyzing the generated signatures to determine a
cluster of signatures of facial concepts; and generating, based on
the cluster of facial concept signatures, a facial
representation.
Inventors: | Raichelgauz; Igal (Tel Aviv, IL); Odinaev; Karina (Tel Aviv, IL); Zeevi; Yehoshua Y. (Haifa, IL) |
Applicant: | Cortica Ltd., Tel Aviv, IL |
Assignee: | Cortica Ltd., Tel Aviv, IL |
Family ID: | 54065620 |
Appl. No.: | 16/721954 |
Filed: | December 20, 2019 |
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number |
15206792 | Jul 11, 2016 | |
16721954 | | |
62289187 | Jan 30, 2016 | |
Current U.S. Class: | 1/1 |
Current CPC Class: | H04H 20/26 20130101;
G06K 2209/27 20130101; H04H 60/66 20130101; H04L 67/306 20130101;
H04N 7/17318 20130101; H04H 60/71 20130101; G06F 16/683 20190101;
H04N 21/466 20130101; G06F 3/0484 20130101; G06F 40/134 20200101;
G06Q 30/0201 20130101; G06F 16/2228 20190101; G06F 16/7847
20190101; G06F 16/9558 20190101; H04L 67/10 20130101; G06F 16/285
20190101; G06F 16/14 20190101; G06F 16/172 20190101; G06F 16/1748
20190101; G06F 16/487 20190101; H04N 21/25891 20130101; G06F 16/35
20190101; G06F 16/433 20190101; G09B 19/0092 20130101; H04H 60/56
20130101; G06F 16/438 20190101; G06N 5/04 20130101; G10L 25/51
20130101; H04N 21/8106 20130101; G06F 16/51 20190101; H04L 67/327
20130101; G06K 9/00758 20130101; G06N 5/022 20130101; G06F 16/7834
20190101; H04H 60/33 20130101; H04H 2201/90 20130101; G06F 16/783
20190101; G06F 16/904 20190101; H04H 20/93 20130101; H04N 21/2668
20130101; G06F 16/685 20190101; G06K 9/6267 20130101; H04H 60/37
20130101; H04H 60/49 20130101; G06F 16/951 20190101; G06N 5/025
20130101; G06N 7/005 20130101; H04L 67/22 20130101; G06F 3/048
20130101; G06F 16/152 20190101; G06Q 30/0246 20130101; H04H 60/58
20130101; Y10S 707/99948 20130101; G06K 9/00281 20130101; G10L
15/26 20130101; G06F 16/284 20190101; G06F 16/4393 20190101; G06K
9/00744 20130101; G10L 15/32 20130101; H04H 60/46 20130101; G06T
19/006 20130101; G06N 20/00 20190101; H04H 20/103 20130101; G06N
5/02 20130101; H04L 65/601 20130101; G06F 3/0488 20130101; G06F
16/40 20190101; G06F 16/41 20190101; G06F 16/435 20190101; H04H
60/59 20130101; Y10S 707/99943 20130101; G06Q 30/0261 20130101;
G06F 16/7844 20190101; G06K 9/00711 20130101; G06F 16/43 20190101;
G06F 16/434 20190101; G06F 16/48 20190101 |
International Class: | G06K 9/00 20060101 G06K009/00; G06F 16/41 20060101 G06F016/41 |
Claims
1. A method for creating a facial representation, comprising:
identifying, via at least one data source, at least one multimedia
content element; generating at least one signature for at least a
portion of each identified multimedia content element, wherein each
generated signature represents at least one facial concept;
analyzing the generated signatures to determine a cluster of
signatures of facial concepts; and generating, based on the cluster
of facial concept signatures, a facial representation.
2. The method of claim 1, wherein the facial representation
includes the determined cluster of signatures.
3. The method of claim 1, wherein the facial representation
includes a list of facial features.
4. The method of claim 1, further comprising: identifying a source
of each multimedia content element; and determining, based on each
identified source, a context of each of the at least one multimedia
content element, wherein the analysis of the generated signatures
is further based on the determined contexts.
5. The method of claim 1, further comprising: identifying metadata
associated with each multimedia content element, wherein the
analysis of the generated signatures is further based on the
identified metadata.
6. The method of claim 5, wherein the metadata associated with each
multimedia content element includes at least one of: a time pointer
associated with a capture of the multimedia content element, a time
pointer associated with an upload of the multimedia content
element, a location pointer associated with a capture of the
multimedia content element, a location pointer associated with an
upload of the multimedia content element, and a tag added to the
multimedia content element.
7. The method of claim 1, wherein each multimedia content element
is at least one of: an image, graphics, a video stream, a video
clip, an audio stream, an audio clip, a video frame, a photograph,
and an image of signals.
8. The method of claim 1, further comprising: associating the
facial representation with a user profile.
9. The method of claim 1, further comprising: querying a
deep-content-classification system to find a match between at least
one concept structure and the identified at least one multimedia
content element.
10. The method of claim 1, wherein each signature is generated by a
signature generator system, wherein the signature generator system
includes a plurality of computational cores configured to receive a
plurality of unstructured data elements, each computational core of
the plurality of computational cores having properties that are at
least partly statistically independent of the other computational
cores, wherein the properties are set independently of each other
core.
11. The method of claim 1, wherein each facial concept is a
collection of signatures representing at least one conceptually
related multimedia content element and metadata describing the
concept, wherein the collection of signatures is a signature-reduced
cluster generated by inter-matching signatures generated
for the at least one multimedia content element.
12. A non-transitory computer readable medium having stored thereon
instructions for causing a processing system to perform a method
for generating a facial representation, wherein the instructions
cause the processing system to: identify, via at least one data
source, at least one multimedia content element; generate at least
one signature for at least a portion of each identified multimedia
content element, wherein each generated signature represents at
least one facial concept; analyze the generated signatures to
determine a cluster of signatures of facial concepts; and generate,
based on the cluster of facial concept signatures, a facial
representation.
13. A system for generating a facial representation, comprising: a
processing system; and a memory, wherein the memory contains
instructions that, when executed by the processing system,
configure the system to: identify, via at least one data source, at
least one multimedia content element; generate at least one
signature for at least a portion of each identified multimedia
content element, wherein each generated signature represents at
least one facial concept; analyze the generated signatures to
determine a cluster of signatures of facial concepts; and generate,
based on the cluster of facial concept signatures, a facial
representation.
14. The system of claim 13, wherein the facial representation
includes the determined cluster of signatures.
15. The system of claim 13, wherein the facial representation
includes a list of facial features.
16. The system of claim 13, wherein the system is further
configured to: identify a source of each multimedia content
element; and determine, based on each identified source, a context
of each of the at least one multimedia content element, wherein the
analysis of the generated signatures is further based on the
determined contexts.
17. The system of claim 13, wherein the system is further
configured to: identify metadata associated with each multimedia
content element, wherein the analysis of the generated signatures
is further based on the identified metadata.
18. The system of claim 17, wherein the metadata associated with
each multimedia content element includes at least one of: a time
pointer associated with a capture of the multimedia content
element, a time pointer associated with an upload of the multimedia
content element, a location pointer associated with a capture of
the multimedia content element, a location pointer associated with
an upload of the multimedia content element, and a tag added to the
multimedia content element.
19. The system of claim 13, wherein each multimedia content element
is at least one of: an image, graphics, a video stream, a video
clip, an audio stream, an audio clip, a video frame, a photograph,
and an image of signals.
20. The system of claim 13, wherein the system is further
configured to: associate the facial representation with a user
profile.
21. The system of claim 13, wherein the system is further
configured to: query a deep-content-classification system to find a
match between at least one concept structure and the identified at
least one multimedia content element.
22. The system of claim 13, further comprising: a signature
generator system for generating the at least one signature, wherein
the signature generator system includes a plurality of
computational cores configured to receive a plurality of
unstructured data elements, each computational core of the plurality
of computational cores having properties that are at least partly
statistically independent of the other computational cores, wherein
the properties are set independently of each other core.
23. The system of claim 13, wherein each facial concept is a
collection of signatures representing at least one conceptually
related multimedia content element and metadata describing the
concept, wherein the collection of signatures is a signature-reduced
cluster generated by inter-matching signatures generated
for the at least one multimedia content element.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application claims the benefit of U.S. Provisional
Patent Application No. 62/289,187 filed on Jan. 30, 2016. This
application is also a continuation-in-part (CIP) of U.S. patent
application Ser. No. 14/509,558 filed on Oct. 8, 2014, now pending,
which is a continuation of U.S. patent application Ser. No.
13/602,858 filed on Sep. 4, 2012, now U.S. Pat. No. 8,868,619. The
Ser. No. 13/602,858 application is a continuation of U.S. patent
application Ser. No. 12/603,123 filed on Oct. 21, 2009, now U.S.
Pat. No. 8,266,185. The Ser. No. 12/603,123 application is a
continuation-in-part of:
[0002] (1) U.S. patent application Ser. No. 12/084,150 having a
filing date of Apr. 7, 2009, now U.S. Pat. No. 8,655,801, which is
the National Stage of International Application No.
PCT/IL2006/001235 filed on Oct. 26, 2006, which claims foreign
priority from Israeli Application No. 171577 filed on Oct. 26,
2005, and Israeli Application No. 173409 filed on Jan. 29,
2006;
[0003] (2) U.S. patent application Ser. No. 12/195,863, filed Aug.
21, 2008, now U.S. Pat. No. 8,326,775, which claims priority under
35 USC 119 from Israeli Application No. 185414 filed on Aug. 21,
2007, and which is also a continuation-in-part of the
above-referenced U.S. patent application Ser. No. 12/084,150;
[0004] (3) U.S. patent application Ser. No. 12/348,888, filed on
Jan. 5, 2009, now pending, which is a CIP of the above-referenced
U.S. patent application Ser. Nos. 12/084,150 and 12/195,863;
and
[0005] (4) U.S. patent application Ser. No. 12/538,495 filed on
Aug. 10, 2009, now U.S. Pat. No. 8,312,031, which is a CIP of the
above-referenced U.S. patent application Ser. Nos. 12/084,150,
12/195,863, and 12/348,888.
[0006] All of the applications referenced above are herein
incorporated by reference for all that they contain.
TECHNICAL FIELD
[0007] The present disclosure relates generally to the analysis of
multimedia content, and more specifically to generating facial
representations of a user associated with a user device.
BACKGROUND
[0008] With the abundance of data made available through various
means in general and the Internet and world-wide web (WWW) in
particular, attracting users to content has become essential for
online businesses. To effectively attract users to their content,
such online businesses must be capable of recognizing and adapting
to user preferences. Accordingly, there is a need to understand the
preferences of users.
[0009] Existing solutions provide several tools to identify users'
preferences. Some existing solutions actively require an input from
the users to specify their interests. However, profiles generated
for users based on their inputs may be inaccurate, as the users
tend to provide only their current interests, or otherwise only
provide partial information due to privacy concerns. For example, a
user submitting a list of musical interests for a social media
account may include only recently listened-to bands or songs rather
than all bands or songs the user enjoys.
[0010] Other existing solutions passively track the users' activity
through particular web sites such as social networks. The
disadvantage with such solutions is that typically limited
information regarding the users is revealed, as users tend to
provide only partial information due to privacy concerns. For
example, users creating an account on Facebook® provide in most
cases only the minimum information required for the creation of the
account. Additional information about such users may be collected
over time, but gathering enough of it to be useful for accurate
identification of user preferences (e.g., via multiple social media
or blog posts) may take weeks or months.
[0011] Further, many entities offer services allowing users to
create customized user profiles. Such customized user profiles
allow users to express their interests, personalities, and
appearances. To this end, the customized user profiles often allow
users to create facial or other image-based representations or
avatars. Some image-based representations may be made based on user
inputs (e.g., selections of body types, body parts, skin color,
etc.), while other representations may be automatically created
based on images of a user. Existing solutions for automatically
creating facial or other representations of users face challenges
in analyzing and representing combinations of facial features
unique to particular users.
[0012] It would therefore be advantageous to provide a solution
that overcomes the deficiencies of the existing solutions.
SUMMARY
[0013] A summary of several example embodiments of the disclosure
follows. This summary is provided for the convenience of the reader
to provide a basic understanding of such embodiments and does not
wholly define the breadth of the disclosure. This summary is not an
extensive overview of all contemplated embodiments, and is intended
to neither identify key or critical elements of all embodiments nor
to delineate the scope of any or all aspects. Its sole purpose is
to present some concepts of one or more embodiments in a simplified
form as a prelude to the more detailed description that is
presented later. For convenience, the term "some embodiments" may
be used herein to refer to a single embodiment or multiple
embodiments of the disclosure.
[0014] The embodiments disclosed herein include a method for
generating a facial representation. The method includes
identifying, via at least one data source, at least one multimedia
content element; generating at least one signature for at least a
portion of each identified multimedia content element, wherein each
generated signature represents at least one facial concept;
analyzing the generated signatures to determine a cluster of
signatures of facial concepts; and generating, based on the cluster
of facial concept signatures, a facial representation of the
user.
[0015] The embodiments disclosed herein also include a
non-transitory computer readable medium having stored thereon
instructions for causing a processing system to perform a method
for generating a facial representation, wherein the instructions
cause the processing system to: identify, via at least one data
source, at least one multimedia content element; generate at least
one signature for at least a portion of each identified multimedia
content element, wherein each generated signature represents at
least one facial concept; analyze the generated signatures to
determine a cluster of signatures of facial concepts; and generate,
based on the cluster of facial concepts, a facial
representation.
[0016] The embodiments disclosed herein also include a system for
generating a facial representation. The system comprises: a
processing system; and a memory, wherein the memory contains
instructions that, when executed by the processing system,
configure the system to: identify, via at least one data source, at
least one multimedia content element; generate at least one
signature for at least a portion of each identified multimedia
content element, wherein each generated signature represents at
least one facial concept; analyze the generated signatures to
determine a cluster of signatures of facial concepts; and generate,
based on the cluster of facial concept signatures, a facial
representation.
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The subject matter disclosed herein is particularly pointed
out and distinctly claimed in the claims at the conclusion of the
specification. The foregoing and other objects, features, and
advantages of the disclosed embodiments will be apparent from the
following detailed description taken in conjunction with the
accompanying drawings.
[0018] FIG. 1 is a network diagram utilized to describe the various
embodiments disclosed herein.
[0019] FIG. 2 is a flowchart illustrating a method for generating a
user profile including a facial representation of a user according
to an embodiment.
[0020] FIG. 3 is a flowchart illustrating a method for analyzing a
plurality of multimedia content elements according to an
embodiment.
[0021] FIG. 4 is a block diagram depicting the basic flow of
information in the signature generator system.
[0022] FIG. 5 is a diagram showing the flow of patches generation,
response vector generation, and signature generation in a
large-scale speech-to-text system.
[0023] FIG. 6 is a flowchart illustrating a method for determining
a context based on multimedia content elements according to an
embodiment.
[0024] FIG. 7 is a block diagram of an interest analyzer according
to an embodiment.
DETAILED DESCRIPTION
[0025] It is important to note that the embodiments disclosed
herein are only examples of the many advantageous uses of the
innovative teachings herein. In general, statements made in the
specification of the present application do not necessarily limit
any of the various claimed embodiments. Moreover, some statements
may apply to some features but not to others. In general, unless
otherwise indicated, singular elements may be in plural and vice
versa with no loss of generality. In the drawings, like numerals
refer to like parts throughout the several views.
[0026] The disclosed embodiments include a system and method for
generating a representation of a face of a user based on multimedia
content elements. Upon receiving access to one or more data sources
associated with the user's device, a plurality of multimedia
content elements existing therein may be identified. At least one
signature is generated for each multimedia content element. Each of
the generated signatures represents a concept. The concepts are
correlated with respect to the generated signatures to determine a
cluster of facial concepts associated with the user. A facial
representation of the user is generated based on the correlation.
In an embodiment, the facial representation includes a list of
facial features or at least one signature.
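The flow described in this paragraph can be summarized in a short sketch. This is a minimal illustration only: `generate_signature`, `is_facial_concept`, and `cluster` are hypothetical stand-ins for the SGS and DCC functionality detailed below, not part of the disclosure.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Sequence

@dataclass
class FacialRepresentation:
    # Per the disclosure, the representation may be a cluster of
    # signatures, a list of facial features, or both.
    signature_cluster: List[tuple] = field(default_factory=list)
    facial_features: List[str] = field(default_factory=list)

def build_facial_representation(
    elements: Sequence[object],
    generate_signature: Callable[[object], tuple],
    is_facial_concept: Callable[[tuple], bool],
    cluster: Callable[[List[tuple]], List[tuple]],
) -> FacialRepresentation:
    # Generate at least one signature per identified element (or portion).
    signatures = [generate_signature(e) for e in elements]
    # Keep only the signatures that represent facial concepts.
    facial = [s for s in signatures if is_facial_concept(s)]
    # Correlate the facial-concept signatures into a cluster and wrap it.
    return FacialRepresentation(signature_cluster=cluster(facial))
```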
[0027] According to another embodiment, the data source from which
each multimedia content element was extracted may be identified.
According to a further embodiment, metadata associated with each
multimedia content element may be identified. According to yet
another embodiment, a user profile including the facial
representation may be generated, and the generated user profile may
be stored in a data storage unit accessible by a server, the user
device, a search engine, a combination thereof, and so on.
[0028] FIG. 1 shows an example network diagram 100 utilized to
describe the various disclosed embodiments. A network 110 is used
to communicate between different parts of the network diagram 100.
The network 110 may be the Internet, the world-wide-web (WWW), a
local area network (LAN), a wide area network (WAN), a metro area
network (MAN), and other networks capable of enabling communication
between elements of the network diagram 100.
[0029] Further communicatively connected to the network 110 is a
user device (UD) 120. The user device 120 may be, but is not
limited to, a personal computer (PC), a personal digital assistant
(PDA), a mobile phone, a smart phone, a tablet computer, an
electronic wearable device (e.g., glasses, a watch, etc.), a smart
television, or any other wired or mobile device equipped with
browsing, viewing, capturing, storing, listening, filtering, and
managing capabilities enabled as further discussed herein
below.
[0030] The user device 120 may further include an interest analyzer
125 installed thereon. The interest analyzer 125 may be a dedicated
application, script, or any other program code stored in a memory of
the user device 120 and executable, for example, by a processing
system (e.g., a microprocessor) of the user device 120. The interest
analyzer 125 may be pre-installed in the user device 120. In a
non-limiting embodiment, the interest analyzer 125 may be
downloaded from an application repository (not shown) such as, for
example, the AppStore®, Google Play®, or any repositories
hosting software applications. The interest analyzer 125 may be
configured to perform some or all of the processes performed by a
server 130 and disclosed herein. Specifically, in an embodiment,
the interest analyzer 125 may be configured to, e.g., generate
facial representations. In an embodiment, the user device 120
includes a local storage 127 for storing multimedia content
elements, concepts, signatures of multimedia content elements, or a
combination thereof. It should be noted that only one user device
120 and one interest analyzer 125 are discussed with respect to
FIG. 1 merely for the sake of simplicity and without limitation on
the disclosure.
[0031] Optionally communicatively connected to the network 110 is a
data warehouse 150 for storing multimedia content elements
associated with a user of the user device. According to an
embodiment, the data warehouse 150 may be associated with a social
networking website or entity utilized by a user of the user device
120. According to another embodiment, the data warehouse 150 may be
a cloud-based storage accessible by the user device 120. In the
embodiment illustrated in FIG. 1, either or both of the user device
120 and a server 130 communicates with the data warehouse 150
through the network 110. Such communication may be subject to an
approval to be received from the user device 120.
[0032] In an embodiment, either or both of the user device 120 and
the server 130 is further communicatively connected to a signature
generator system (SGS) 140 and to a deep-content classification
(DCC) system 170 through the network 110. In another embodiment,
each of the DCC system 170 and the SGS 140 may be embedded in the
server 130 or the user device 120. In a further embodiment, the SGS
140 may further include a plurality of computational cores
configured for signature generation, where each computational core
is at least partially statistically independent from the other
computational cores. It should be noted that each of the user
device 120 and the server 130 typically includes a processing
system (not shown) coupled to a memory (not shown). The memory
contains instructions that can be executed by the processing
system.
[0033] According to an embodiment, upon receiving access to one or
more storage units associated with the user of the user device 120,
the server 130 is configured to identify multimedia content
elements stored therein. The storage units may be the web sources
160, the local storage 127 of the user device 120, both, or any
other storage including multimedia content elements associated with
a user of the user device 120. A multimedia content element may be,
but is not limited to, an image, a graphic, a video stream, a video
clip, an audio stream, an audio clip, a video frame, a photograph,
an image of signals (e.g., spectrograms, phasograms, scalograms,
etc.), a combination thereof, or a portion thereof.
[0034] Alternatively, according to the disclosed embodiments, the
server 130 is configured to receive multimedia content elements
from the user device 120 accompanied by a request to generate a
user profile respective thereof. With this aim, the server 130
sends each received multimedia content element to the SGS 140, to
the DCC system 170, or to both. The decision of which system is used
(i.e., the SGS 140, the DCC system 170, or both) may be based on a
default configuration or on the request.
[0035] In an embodiment, the SGS 140 receives a multimedia content
element and returns at least one signature for the received
multimedia content element. The generated signature(s) may be
robust to noise and distortion. To this end, the SGS 140 may
include a plurality of computational cores, where each
computational core is at least partially statistically independent
of the other computational cores. The process for generating the
signatures is discussed in detail herein below.
[0036] The SGS 140 may send the generated signature(s) to the
server 130. Based on the generated signature(s), the server 130 is
configured to search for similar multimedia content elements in a
data warehouse. The process of matching between multimedia content
elements is discussed in detail below with respect to FIGS. 4 and
5.
[0037] The server 130 is configured to analyze the similar
multimedia content elements found during the search with respect to
the signatures in order to determine a cluster of facial concepts
associated with the user of the user device 120. The analysis may
include identification of the source in which each multimedia
content element was identified. The sources from which the
multimedia content elements were identified may be relevant in
determining whether each multimedia content element shows the
user's face or facial features. As an example, an image captured by
the user device 120 using a camera located on the screen side of
the user device 120 is more likely to include a facial image of the
user than an image extracted from a web source associated with a
social network in which the user has an account (such an image
may feature, for example, the user's entire body, with only a small
portion of the image illustrating the user's face).
[0038] According to another embodiment, metadata associated with
each multimedia content element may by identified by the server
130. The metadata may include, but is not limited to, a time
pointer associated with the capture or upload of each multimedia
content element, a location pointer associated with the capture or
upload of each multimedia content element, one or more tags added
to each multimedia content element, a combination thereof, and so
on.
[0039] In a further embodiment, such metadata may be analyzed, and
the results of the metadata analysis may be utilized to, e.g.,
determine whether the multimedia content element is optimally
descriptive of the user's facial features. As an example, a photo
taken at 11:00 PM in an outdoor park may be determined not to be
optimally descriptive of a user's face or facial features. As
another example, a photo associated with the tag "selfie" may be
determined to be optimally descriptive of the user's facial
features.
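A minimal sketch of such a metadata screen, following the two examples above; the field names (tags, capture_time, location_type) and the hour cutoff are assumptions, since the disclosure does not specify a metadata schema:

```python
from datetime import datetime

def likely_descriptive(metadata: dict) -> bool:
    tags = {t.lower() for t in metadata.get("tags", [])}
    if "selfie" in tags:
        return True  # a tagged selfie is likely descriptive of the face
    t = metadata.get("capture_time")
    if (t is not None and metadata.get("location_type") == "outdoor"
            and (t.hour >= 22 or t.hour < 5)):
        return False  # late-night outdoor capture: likely poor lighting
    return True  # no disqualifying signal found

print(likely_descriptive({"tags": ["Selfie"]}))                  # True
print(likely_descriptive({"capture_time": datetime(2016, 1, 30, 23),
                          "location_type": "outdoor"}))          # False
```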
[0040] According to another embodiment, the analysis of the
received multimedia content element may further be based on a
concept structure (hereinafter referred to as "concept"). A concept
is a collection of signatures representing elements of the
unstructured data and metadata describing the concept. The concept
may be a signature-reduced cluster of related signatures. As a
non-limiting example, a `Superman concept` is a signature-reduced
cluster of signatures describing elements (e.g., multimedia
elements) related to, e.g., a Superman cartoon, together with a set
of metadata providing a textual representation of the Superman
concept. Techniques for generating concept structures are also
described in the above-referenced U.S. Pat. No. 8,266,185, assigned
to the common assignee, which is hereby incorporated by reference
for all that it contains.
[0041] According to a further embodiment, a query is sent to the
DCC system 170 to match the received multimedia content element to
at least one concept. The identification of a concept matching the
received multimedia content element includes comparing at least one
signature generated for the received multimedia content element
(e.g., a signature generated either by the SGS 140 or by the DCC
system 170) to signatures representing each concept structure. The
matching can be performed across all concept structures maintained
by the DCC system 170.
[0042] It should be noted that, if the query sent to the DCC system
170 results in matching multiple concept structures, a correlation
among the matching concept structures is performed to generate a
facial representation that best describes the user's face. The
correlation can be achieved by identifying a ratio between
signatures' sizes, a spatial location of each signature, using
probabilistic models, or a combination thereof. The facial
representation may be, but is not limited to, a list of features,
at least one representative signature representing facial features,
and the like.
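A minimal sketch of such a correlation step is below. It ranks hypothetical concept matches by the ratio between signature size and element size plus a simple spatial-centrality term; the field names and the 0.7/0.3 weights are illustrative assumptions, not part of the disclosure:

```python
def best_matching_concept(matches: list) -> dict:
    def score(m: dict) -> float:
        size_ratio = m["signature_size"] / m["element_size"]
        cx, cy = m["location"]  # normalized [0, 1] signature position
        # Prefer signatures that are large relative to the element and
        # roughly centered, where a face most often appears.
        centrality = 1.0 - (abs(cx - 0.5) + abs(cy - 0.5))
        return 0.7 * size_ratio + 0.3 * centrality
    return max(matches, key=score)

# The larger, centered match wins:
print(best_matching_concept([
    {"signature_size": 500,  "element_size": 10_000, "location": (0.1, 0.9)},
    {"signature_size": 4000, "element_size": 10_000, "location": (0.5, 0.4)},
]))
```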
[0043] Based on the matching concept structures, the server 130 is
configured to generate a facial representation of the user. In an
embodiment, the facial representation may include a cluster of
signatures or the list of facial features. In an embodiment,
generating the facial representation may include generating a
cluster of signatures representing facial concepts associated with
the user of the user device 120 based on the at least one
representative signature. The facial concepts include concept
structures related to facial features such as, but not limited to,
eyes, hair, mouth, nose, eyebrows, forehead, ears, cheeks, facial
hair, and the like.
[0044] In another embodiment, generating the facial representation
may include determining textual representations of facial features
of the user and generating a list based thereon. The determined
textual representations may include textual multimedia content
elements associated with any or all of the at least one
representative signature.
[0045] The facial representation may be generated based on
multimedia content elements that are determined as optimally
describing the face of the user. For example, the optimally
descriptive multimedia content elements may include images of, but
not limited to, a nose, hair, eyes, a mouth, facial hair, eyebrows,
a forehead, cheeks, a chin, birth marks, and the like. The facial
representation may be sent for storage in, for example, the data
warehouse 150.
[0046] To this end, in an embodiment, generating the facial
representation may include analyzing the multimedia content
elements featuring the face of the user and determining, based on
the analysis, the optimally descriptive multimedia content
elements. In a further embodiment, the analysis may be based on the
analysis of the signatures of the multimedia content elements
featuring the face of the user.
[0047] In an embodiment, determining whether each multimedia
content element is an optimally descriptive multimedia content
element may be based on, but not limited to, a size of the face
relative to the entire multimedia content, a number of facial
features illustrated by the multimedia content element, an angle of
the view of the face, whether a portion or all of the face is
obstructed or otherwise unclear, a combination thereof, and the
like. The optimally descriptive multimedia contents may be
determined based on one or more thresholds such as, but not limited
to, a pixel threshold, a threshold number of facial features, a
threshold angle, a combination thereof, and the like. As noted
above, in a further embodiment, determining whether a multimedia
content element is optimally descriptive may be further based on
metadata associated with the multimedia content element.
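A minimal sketch of such a threshold screen follows; the concrete threshold values and field names are assumptions, since the disclosure names only the kinds of thresholds (pixel, facial-feature count, angle):

```python
def is_optimally_descriptive(element: dict, *,
                             min_face_pixels: int = 10_000,
                             min_features: int = 3,
                             max_angle_deg: float = 45.0) -> bool:
    return (element["face_pixels"] >= min_face_pixels            # face size
            and element["num_facial_features"] >= min_features   # features shown
            and abs(element["view_angle_deg"]) <= max_angle_deg  # view angle
            and not element.get("obstructed", False))            # face visible

# A frontal close-up showing eyes, nose, mouth, and a beard passes:
print(is_optimally_descriptive({"face_pixels": 40_000,
                                "num_facial_features": 4,
                                "view_angle_deg": 5.0}))  # True
```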
[0048] As a non-limiting example for determining optimally
descriptive multimedia content elements, an image in which only the
middle portion of a forehead of the user is shown may be determined
to not be an optimally descriptive multimedia content element
because no entire facial feature is shown. As another non-limiting
example, a portion of a video showing a close up shot of the front
of the user's face including eyes, a nose, a mouth, and a beard may
be determined to be an optimally descriptive multimedia content
element. As yet another non-limiting example, an image showing a
back isometric view of the user's face may be determined as not
being an optimally descriptive multimedia content element.
[0049] It should be noted that certain tasks performed by the
server 130, the SGS 140, and the DCC system 170 may be carried out,
alternatively or collectively, by the user device 120 and the
interest analyzer 125. Specifically, in an embodiment, signatures
may be generated by a signature generator (e.g., the signature
generator 710 discussed further herein below with respect to FIG.
7). An example block diagram of an interest analyzer 125 installed
on a user device 120 is described further herein below with respect
to FIG. 7.
[0050] It should also be noted that the signatures may be generated
for multimedia content elements stored in the data warehouse 150, in
the local storage 127 of the user device 120, or in a combination
thereof.
[0051] FIG. 2 depicts an example flowchart 200 illustrating a
method for generating a user profile including a facial
representation according to an embodiment. In an embodiment, the
method may be performed by a server (e.g., the server 130). In
another embodiment, the method may be performed by an interest
analyzer (e.g., the interest analyzer 125 installed on the user
device 120).
[0052] At S210, multimedia content elements are identified through
data sources associated with a user of a user device. The
multimedia content elements may be identified based on a request
for creating a user profile. The request may indicate, for example,
particular multimedia content elements to be identified, data
sources in which the multimedia content elements may be identified,
metadata tags of multimedia content elements to be identified,
combinations thereof, and the like. The data sources may include,
but are not limited to, web sources (e.g., the web sources 160), a
local storage (e.g., the local storage 127 of the user device 120
or a local storage associated with the server 130), a combination
thereof, and the like.
[0053] In a further embodiment, S210 may include pre-filtering
multimedia content elements that are unrelated to the user's face
or to faces generally. To this end, S210 may further include
analyzing metadata tags associated with multimedia content elements
in the data sources to identify multimedia content elements
featuring the user's face. As a non-limiting example, if tags
associated with a multimedia content element indicate that the
multimedia content element does not show a person or, in
particular, does not show the user, the multimedia content element
may be pre-filtered out. The pre-filtering may reduce subsequent
usage of computational resources due to, e.g., signature
generation, concept correlation, and the like.
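As a rough illustration, the pre-filtering step might look like the following sketch; the tag vocabulary is an assumption made for the example:

```python
def prefilter_elements(elements: list, user_name: str) -> list:
    kept = []
    for e in elements:
        tags = {t.lower() for t in e.get("tags", [])}
        if tags & {"landscape", "no_person", "object"}:
            continue  # tagged as not showing a person at all
        if tags and "person" not in tags and user_name.lower() not in tags:
            continue  # tagged, but not as the user or any person
        kept.append(e)  # untagged or plausibly showing the user's face
    return kept
```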
[0054] At S220, at least one signature is generated for each
identified multimedia content element. In an embodiment, S220 may
include generating a signature for portions of any or all of the
multimedia content elements. Each signature represents a concept
associated with the multimedia content element. For example, a
signature generated for a multimedia content element featuring a
man in a costume may represent at least a "Batman®" concept.
The signature(s) are generated by a signature generator (e.g., the
SGS 140 or the signature generator 710) as described herein below
with respect to FIGS. 4 and 5.
[0055] At S230, the identified multimedia content elements are
analyzed based on the signatures. In an embodiment, the analysis
includes determining a context of the identified multimedia content
elements related to the user's face. In a further embodiment, the
analysis includes determining, based on the context, multimedia
content elements that optimally describe the user's face and
generating a cluster including signatures representing the
optimally descriptive multimedia content elements. Determining
contexts of multimedia content elements based on signatures is
described further herein below with respect to FIG. 3.
[0056] At S240, based on the analysis of the multimedia content
elements, a facial representation of the user of the user device is
generated. In an embodiment, generating the facial representation
may include generating a cluster of signatures including signatures
associated with multimedia content elements that optimally describe
the face of the user as described further herein above with respect
to FIG. 1.
[0057] In another embodiment, generating the facial representation
may include filtering out multimedia content elements or portions
thereof that are not related to the user's face. In yet another
embodiment, generating the facial representation may include
determining, based on the optimally descriptive multimedia content
elements, a list of facial features. The list of facial features
may include a plurality of textual multimedia content elements
associated with any of the optimally descriptive multimedia content
elements.
[0058] At S250, the facial representation is associated with a user
profile of the user of the user device. In an embodiment, S250
includes creating a user profile and associating the facial
representation with the generated user profile. In a further
embodiment, creating the user profile may include analyzing a
plurality of multimedia content elements associated with the user
to determine information related to the user such as, for example,
interests of the user, contacts of the user (e.g., friends, family,
and acquaintances), events the user has attended, a profession of
the user, and the like. An example method and system for creating
user profiles based on analysis of multimedia content elements is
described further in U.S. patent application Ser. No. 15/206,711,
assigned to the common assignee, which is hereby incorporated by
reference for all that it contains.
[0059] At S260, the generated user profile is sent for storage in a
storage such as, for example, the data warehouse 150.
[0060] FIG. 3 depicts an example flowchart S230 illustrating a
method for analyzing a plurality of multimedia content elements and
determining contexts of the multimedia content elements according
to an embodiment. In an embodiment, the method is performed using
signatures generated for the multimedia content elements by a
signature generator system.
[0061] At S310, at least one concept structure matching the
multimedia content elements is identified. In an embodiment, the
concept structure is identified based on the signatures of the
multimedia content elements. In a further embodiment, S310 may
include querying a DCC system (e.g., the DCC system 170) using the
signatures generated for the multimedia content elements. The
metadata of the matching concept structure is used for correlation
between a first multimedia content element and at least a second
multimedia content element of the plurality of multimedia content
elements.
[0062] At optional S320, a source of each multimedia content
element is identified. As further described hereinabove, the source
of each multimedia content element may be indicative of the content
or the context of the multimedia content element. In an embodiment,
S320 may further include determining, based on the source of each
multimedia content element, at least one potential context of the
multimedia content element. In a further embodiment, each source
may be associated with a plurality of potential contexts of
multimedia content elements. As a non-limiting example, for a
multimedia content stored in a source including video clips of
basketball games, potential contexts may include, but are not
limited to, "basketball," "the Chicago Bulls.RTM.," "the Golden
State Warriors.RTM.," "the Cleveland Cavaliers.RTM.," "NBA,"
"WNBA," "March Madness," and the like.
[0063] At optional S330, metadata associated with each multimedia
content element is identified. The metadata may include, for
example, a time pointer associated with the capture or upload of
each multimedia content element, a location pointer associated with the
capture or upload of each multimedia content element, one or more
tags added to each multimedia content element, a combination
thereof, and so on.
[0064] At S340, a context of the multimedia content elements is
determined. In an embodiment, the context may be determined based
on the correlation between a plurality of concepts related to
multimedia content elements. The context may be further based on
relationships between the multimedia content elements. Determining
contexts of multimedia content elements based on concepts is
described further herein below with respect to FIG. 6.
[0065] At S350, based on the determined context, a cluster
including signatures related to multimedia content elements that
optimally describe the user's face is generated. In an embodiment,
S350 includes matching the generated signatures to a signature
representing the determined context. Signatures matching the
context signature above a predefined threshold may be determined to
represent multimedia content elements that optimally describe the
user's face. In a further embodiment, the cluster may be a
signature-reduced cluster.
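A minimal sketch of S350, assuming signatures are real-valued vectors compared to the context signature by cosine similarity with a 0.8 cutoff (the disclosure does not fix a matching metric or threshold value):

```python
import numpy as np

def context_cluster(signatures: np.ndarray,
                    context_signature: np.ndarray,
                    threshold: float = 0.8) -> np.ndarray:
    # Keep signatures whose match to the context signature exceeds the
    # predefined threshold.
    sims = signatures @ context_signature / (
        np.linalg.norm(signatures, axis=1)
        * np.linalg.norm(context_signature) + 1e-12)
    return signatures[sims > threshold]

# Example: rows are per-element signatures; one row matches the context.
context = np.array([1.0, 0.0, 1.0, 0.0])
sigs = np.array([[1.0, 0.1, 0.9, 0.0],   # close to the context signature
                 [0.0, 1.0, 0.0, 1.0]])  # unrelated
print(context_cluster(sigs, context))    # keeps only the first row
```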
[0066] FIGS. 4 and 5 illustrate the generation of signatures for
the multimedia content elements by the SGS 140 according to one
embodiment. An example high-level description of the process for
large scale matching is depicted in FIG. 4. In this example, the
matching is for a video content.
[0067] Video content segments 2 from a Master database (DB) 6 and a
Target DB 1 are processed in parallel by a large number of
independent computational Cores 3 that constitute an architecture
for generating the Signatures (hereinafter the "Architecture").
Further details on the computational Cores generation are provided
below. The independent Cores 3 generate a database of Robust
Signatures and Signatures 4 for Target content-segments 5 and a
database of Robust Signatures and Signatures 7 for Master
content-segments 8. An example process of signature generation for
an audio component is shown in detail in FIG. 4. Finally, Target
Robust Signatures and/or Signatures are effectively matched, by a
matching algorithm 9, to Master Robust Signatures and/or Signatures
database to find all matches between the two databases.
[0068] To demonstrate an example of the signature generation
process, it is assumed, merely for the sake of simplicity and
without limitation on the generality of the disclosed embodiments,
that the signatures are based on a single frame, leading to certain
simplification of the computational cores generation. The Matching
System is extensible for signatures generation capturing the
dynamics in-between the frames. In an embodiment, the server 130 is
configured with a plurality of computational cores to perform
matching between signatures.
[0069] The Signatures' generation process is now described with
reference to FIG. 5. The first step in the process of signatures
generation from a given speech-segment is to break down the
speech-segment into K patches 14 of random length P and random
position within the speech segment 12. The breakdown is performed
by the patch generator component 21. The values of the number of
patches K, the random length P, and the random position parameters
are determined based on optimization, considering the tradeoff between
accuracy rate and the number of fast matches required in the flow
process of the server 130 and SGS 140. Thereafter, all the K
patches are injected in parallel into all computational Cores 3 to
generate K response vectors 22, which are fed into a signature
generator system 23 to produce a database of Robust Signatures and
Signatures 4.
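A minimal sketch of this patch-generation step; the segment length, K, and the patch-length bound are assumed values standing in for the optimization described above:

```python
import numpy as np

def random_patches(segment: np.ndarray, k: int, p_max: int,
                   rng: np.random.Generator) -> list:
    # Break a 1-D segment into K patches of random length P and random
    # position, per the paragraph above.
    patches = []
    for _ in range(k):
        p = int(rng.integers(1, p_max + 1))             # random length P
        start = int(rng.integers(0, len(segment) - p))  # random position
        patches.append(segment[start:start + p])
    return patches

rng = np.random.default_rng(0)
patches = random_patches(rng.random(16_000), k=8, p_max=400, rng=rng)
print([len(p) for p in patches])
```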
[0070] In order to generate Robust Signatures, i.e., Signatures that
are robust to additive noise $L$ (where $L$ is an integer equal to or
greater than 1), by the Computational Cores 3, a frame $i$ is injected
into all the Cores 3. Then, the Cores 3 generate two binary response
vectors: $\vec{S}$, which is a Signature vector, and $\vec{RS}$,
which is a Robust Signature vector.
[0071] For generation of signatures robust to additive noise, such as
White-Gaussian-Noise, scratch, etc., but not robust to distortions,
such as crop, shift, and rotation, etc., a core $C_i = \{n_i\}$
$(1 \le i \le L)$ may consist of a single leaky integrate-to-threshold
unit (LTU) node or more nodes. The node $n_i$ equations are:

$$V_i = \sum_j w_{ij} k_j, \qquad n_i = \theta(V_i - Th_x)$$

[0072] where $\theta$ is a Heaviside step function; $w_{ij}$ is a
coupling node unit (CNU) between node $i$ and image component $j$ (for
example, grayscale value of a certain pixel $j$); $k_j$ is an image
component $j$ (for example, grayscale value of a certain pixel $j$);
$Th_x$ is a constant threshold value, where $x$ is $S$ for Signature
and $RS$ for Robust Signature; and $V_i$ is a coupling node value.
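A direct NumPy transcription of these node equations, as a sketch; the array shapes, random weights, and threshold values in the example are assumptions:

```python
import numpy as np

def generate_signatures(image: np.ndarray, W: np.ndarray,
                        th_s: float, th_rs: float):
    # V_i = sum_j w_ij * k_j and n_i = theta(V_i - Th_x), evaluated once
    # with Th_S and once with Th_RS.
    k = image.ravel().astype(float)      # image components k_j (grayscale)
    V = W @ k                            # coupling node values V_i
    S = (V > th_s).astype(np.uint8)      # Signature vector (Heaviside step)
    RS = (V > th_rs).astype(np.uint8)    # Robust Signature vector
    return S, RS

# Example: L = 128 LTU cores, one weight row each, over an 8x8 patch.
rng = np.random.default_rng(0)
S, RS = generate_signatures(rng.random((8, 8)),
                            rng.standard_normal((128, 64)),
                            th_s=0.5, th_rs=2.0)
print(S.sum(), RS.sum())  # RS fires on fewer nodes since Th_RS > Th_S
```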
[0073] The threshold values $Th_x$ are set differently for Signature
generation and for Robust Signature generation. For example, for a
certain distribution of $V_i$ values (for the set of nodes), the
thresholds for Signature ($Th_S$) and Robust Signature ($Th_{RS}$)
are set apart, after optimization, according to at least one of the
following criteria:
[0074] 1: For $V_i > Th_{RS}$:

[0075] $1 - p(V > Th_S) = 1 - (1 - \varepsilon)^l \ll 1$

i.e., given that $l$ nodes (cores) constitute a Robust Signature of a
certain image $I$, the probability that not all of these $l$ nodes
will belong to the Signature of the same, but noisy, image is
sufficiently low (according to a system's specified accuracy).

[0076] 2: $p(V_i > Th_{RS}) \approx l/L$

i.e., approximately $l$ out of the total $L$ nodes can be found to
generate a Robust Signature according to the above definition.

[0077] 3: Both Robust Signature and Signature are generated for a
certain frame $i$.
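As a quick numeric illustration of criteria 1 and 2, under assumed values of $\varepsilon$, $l$, and $L$ (the disclosure does not fix these):

```python
# eps is the assumed per-node probability of dropping below Th_S under
# noise, l the Robust Signature size, L the total number of nodes.
eps, l, L = 0.01, 20, 128
p_not_all = 1 - (1 - eps) ** l  # chance that not all l nodes survive noise
print(f"criterion 1: 1 - (1 - eps)^l = {p_not_all:.3f} (should be << 1)")
print(f"criterion 2: l / L = {l / L:.3f} (target fraction above Th_RS)")
```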
[0078] It should be understood that the generation of a signature
is unidirectional, and typically yields lossless compression, where
the characteristics of the compressed data are maintained but the
uncompressed data cannot be reconstructed. Therefore, a signature
can be used for the purpose of comparison to another signature
without the need of comparison to the original data. The detailed
description of the Signature generation can be found in U.S. Pat.
Nos. 8,326,775 and 8,312,031, assigned to the common assignee,
which are hereby incorporated by reference for all that they
contain.
[0079] A Computational Core generation is a process of definition,
selection, and tuning of the parameters of the cores for a certain
realization in a specific system and application. The process is
based on several design considerations, such as:
[0080] (a) The Cores should be designed so as to obtain maximal
independence, i.e., the projection from a signal space should
generate a maximal pair-wise distance between any two cores'
projections into a high-dimensional space.
[0081] (b) The Cores should be optimally designed for the type of
signals, i.e., the Cores should be maximally sensitive to the
spatio-temporal structure of the injected signal, for example, and
in particular, sensitive to local correlations in time and space.
Thus, in some cases a core represents a dynamic system, such as in
state space, phase space, edge of chaos, etc., which is uniquely
used herein to exploit their maximal computational power.
[0082] (c) The Cores should be optimally designed with regard to
invariance to a set of signal distortions, of interest in relevant
applications.
[0083] A detailed description of the Computational Core generation
and the process for configuring such cores is discussed in more
detail in the above-noted U.S. patent application Ser. No.
12/084,150.
[0084] FIG. 6 is an example flowchart S340 illustrating a method
for determining a context of a plurality of multimedia content
elements based on concepts according to an embodiment.
[0085] At S610, a plurality of multimedia content elements is
identified. The identified multimedia content elements may be
received from, e.g., a user device, or retrieved from, e.g., a data
warehouse.
[0086] At S620, at least one signature is identified for each of
the multimedia content elements. In an embodiment, each signature
may be generated as described further herein above with respect to
FIGS. 4 and 5. It should also be noted that any of the signatures
may be generated based on a portion of a multimedia content
element.
[0087] At S630, the generated signatures are analyzed to determine
a correlation between the signatures of the multimedia content
elements or portions thereof. In an embodiment, S630 includes
determining correlations between concepts of the multimedia content
elements. In a further embodiment, the correlations between concepts
are determined by identifying, e.g., a ratio between signatures'
sizes or a spatial location of each signature, using probabilistic
models. Each signature represents a concept and is
generated for a multimedia content element. Thus, identifying, for
example, the ratio of signatures' sizes may also indicate the ratio
between the size of their respective multimedia elements.
[0088] At S640, based on the analysis of the generated signatures,
a context of the plurality of multimedia content elements is
determined. In an embodiment, it may further be determined whether
the context is a strong context.
[0089] A context is determined as the correlation between a
plurality of concepts. A strong context is determined when there
are multiple concepts, i.e., a plurality of concepts that satisfy
the same predefined condition. As an example, signatures generated
for multimedia content elements of a smiling child with a Ferris
wheel in the background are analyzed. The concept of the signature
of the smiling child is "amusement" and the concept of a signature
of the Ferris wheel is "amusement park". The relationship between
the signatures of the child and of the Ferris wheel may be further
analyzed to determine that the Ferris wheel is bigger than the
child. The relation analysis results in a determination that the
Ferris wheel is used to entertain the child. Therefore, the
determined context may be "amusement."
[0090] According to an embodiment, one or more models, typically
probabilistic models, may be utilized to determine the correlation
between signatures representing concepts. The probabilistic models
determine, for example, the probability that a signature may appear
in the same orientation and in the same ratio as another signature.
The analysis may be further based on previously analyzed
signatures.
[0091] In another embodiment, the context can be determined further
based on a ratio of the sizes of the objects in the multimedia
content elements and their relative spatial orientations (i.e.,
position, arrangement, direction, combinations thereof, and the
like). For example, based on an image containing multimedia content
elements related to bears having different sizes, a context may be
determined as "family of bears." As another example, based on an
image containing multimedia content elements of people facing the
same direction (toward a camera) and having similar sizes as well
as a banner for a school saying "graduation," a context may be
determined as "graduation photograph."
[0092] At S650, the determined context is stored in, e.g., the data
warehouse 150.
[0093] As a non-limiting example, a plurality of multimedia content
elements contained in an image is identified. According to this
example, multimedia content elements of the singer "Adele", "red
carpet", and a "Grammy" award are shown in the image. Signatures
are generated for each of the multimedia content elements. The
correlation between "Adele", "red carpet", and a "Grammy" award is
determined with respect to the signatures and the context of the
image is determined based on the correlation. According to this
example, such a context may be "Adele Winning the Grammy Award".
The determined context is stored in a data warehouse.
[0094] As another non-limiting example, multimedia content elements
related to objects such as a "glass", a "cutlery", and a "plate"
are identified. Signatures are generated for the glass, cutlery,
and plate multimedia content elements. The correlation between the
concepts represented by the signatures is determined based on
previously analyzed signatures of glasses, cutlery, and plates.
According to this example, as all of the concepts related to the
"glass", the "cutlery", and the "plate" satisfy the same predefined
condition, a strong context is determined. Based on the correlation
among the multimedia content elements and the relative sizes and
orientations of the objects illustrated by the multimedia content
elements, the context of such concepts is determined to be a "table
set".
[0095] FIG. 7 depicts an example block diagram of an interest
analyzer 125 installed on the user device 120 according to an
embodiment. The interest analyzer 125 may be configured to access
an interface of a user device or of a server. The interest analyzer
125 is further communicatively connected to a processing system
(PS, not shown) such as a processor and to a memory (not shown). The
memory contains instructions that, when executed by the processing
system, configure the interest analyzer 125 as further described
hereinabove and below. The interest analyzer 125 may
further be communicatively connected to a storage unit (e.g., the
local storage 127 of the user device 120 or a storage of the server
130) including a plurality of multimedia content elements.
[0096] In an embodiment, the interest analyzer 125 includes a
signature generator (SG) 710, a data storage (DS) 720, a
recommendations engine 730, and a facial recognizer (FR) 740. The
signature generator 710 may be configured to generate signatures
for multimedia content elements. In a further embodiment, the
signature generator 710 includes a plurality of computational cores
as discussed further herein above, where each computational core is
at least partially statistically independent of the other
computational cores.
[0097] The data storage 720 may store a plurality of multimedia
content elements, a plurality of concepts, signatures for the
multimedia content elements, signatures for the concepts, or a
combination thereof. In a further embodiment, the data storage 720
may include a limited set of concepts relative to a larger set of
known concepts. Such a limited set of concepts may be utilized
when, for example, the data storage 720 is included in a device
having a relatively low storage capacity such as, e.g., a
smartphone or other mobile device.
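A limited concept set might be maintained, for example, with a capacity-capped store such as the following sketch; the least-recently-used eviction policy is an assumption, not a requirement of the disclosure.

```python
# Hypothetical sketch of a capacity-limited concept store for devices
# with low storage; LRU eviction is an assumed policy.
from collections import OrderedDict

class LimitedConceptStore:
    def __init__(self, max_concepts: int = 1000):
        self.max_concepts = max_concepts
        self._concepts: OrderedDict[str, bytes] = OrderedDict()

    def put(self, name: str, signature: bytes) -> None:
        if name in self._concepts:
            self._concepts.move_to_end(name)
        self._concepts[name] = signature
        if len(self._concepts) > self.max_concepts:
            self._concepts.popitem(last=False)  # evict oldest concept

    def get(self, name: str) -> bytes | None:
        if name in self._concepts:
            self._concepts.move_to_end(name)    # mark as recently used
        return self._concepts.get(name)
```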
[0098] The recommendations engine 730 may be configured to generate
contextual insights based on multimedia content elements related to
the user interest, to query sources of information (including,
e.g., the data storage 720 or another data source), and to cause a
display of recommendations on the user device 120.
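The querying step may be pictured, without limitation, as in the following sketch, in which hypothetical source interfaces are queried with a contextual insight and the merged results are returned as recommendations.

```python
# Non-limiting sketch of the querying step; InfoSource is a
# hypothetical interface for the data storage 720 or other sources.
from typing import Protocol

class InfoSource(Protocol):
    def search(self, insight: str, limit: int) -> list[str]: ...

def recommend(insight: str, sources: list[InfoSource],
              limit: int = 5) -> list[str]:
    """Query every source with the contextual insight, merge results."""
    items: list[str] = []
    for source in sources:
        items.extend(source.search(insight, limit))
    # De-duplicate while preserving source order, then truncate.
    unique = list(dict.fromkeys(items))
    return unique[:limit]
```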
[0099] According to an embodiment, the interest analyzer 125 is
configured to receive at least one multimedia content element. The
interest analyzer 125 is configured to initialize the signature
generator (SG) 710 to generate at least one signature for the
received at least one multimedia content element.
[0100] In an embodiment, the interest analyzer 125 is configured to
initialize the facial recognizer 740 to generate a facial
representation. The facial representation may be generated based on
signatures generated for the received at least one multimedia
content element. To this end, the facial recognizer 740 may be
configured to analyze the received at least one multimedia content
element based on the generated signatures. Based on the analysis,
the facial recognizer 740 is configured to determine optimally
descriptive multimedia content elements representing facial
features as described further herein above. Based on the optimally
descriptive multimedia content elements, the facial recognizer 740
is configured to generate the facial representation. In an
embodiment, the facial representation may include a plurality or
cluster of signatures associated with the optimally descriptive
multimedia content elements. In another embodiment, the facial
representation may include a list of facial features illustrated by
the optimally descriptive multimedia content elements.
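One non-limiting way to read "optimally descriptive" is to rank elements by how strongly their signatures match a facial concept and keep the top-ranked cluster, as in the following sketch; the matching measure shown is an assumption.

```python
# Hedged sketch: "optimally descriptive" modeled as the elements whose
# signatures match a facial concept most strongly; cosine similarity
# is an assumed stand-in for the disclosure's abstract measure.
import numpy as np

def facial_representation(signatures: list[np.ndarray],
                          concept_signature: np.ndarray,
                          top_k: int = 5) -> list[np.ndarray]:
    def match(sig: np.ndarray) -> float:
        return float(np.dot(sig, concept_signature) /
                     (np.linalg.norm(sig) *
                      np.linalg.norm(concept_signature)))

    # Rank elements by how strongly they express the facial concept
    # and keep the top-k as the representation's signature cluster.
    ranked = sorted(signatures, key=match, reverse=True)
    return ranked[:top_k]
```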
[0101] In an embodiment, the memory further contains instructions
to query a user profile of the user stored in the data storage (DS)
720 to determine a user interest. The memory further contains
instructions to generate a contextual insight based on the user
interest and the at least one signature. Based on the contextual
insight, the recommendations engine 730 is initialized to search for
one or more content items that match the contextual insight. The
matching content items may be provided by the recommendations
engine 730 to the user as recommendations via the interface.
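The flow of this paragraph may be sketched end to end as follows; the profile store, the insight-combination rule, and the search index are all hypothetical stubs rather than the disclosed implementation.

```python
# Non-limiting sketch of the profile -> interest -> insight -> search
# flow; every component here is a hypothetical stub.
class ProfileStore:
    def __init__(self, interests: dict[str, str]):
        self._interests = interests

    def user_interest(self, user_id: str) -> str:
        return self._interests.get(user_id, "general")

def contextual_insight(interest: str, signature_concept: str) -> str:
    # Naive combination rule standing in for the unspecified logic.
    return f"{interest} {signature_concept}"

def recommend(user_id: str, signature_concept: str,
              store: ProfileStore,
              index: dict[str, list[str]]) -> list[str]:
    insight = contextual_insight(store.user_interest(user_id),
                                 signature_concept)
    return index.get(insight, [])

# Usage: a user interested in "dogs" whose photo signature maps to
# the concept "beach" might receive dog-friendly beach content.
store = ProfileStore({"u1": "dogs"})
index = {"dogs beach": ["dog-friendly beaches near you"]}
print(recommend("u1", "beach", store, index))
```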
[0102] Each of the recommendations engine 730 and the signature
generator 710 can be implemented with any combination of
general-purpose microprocessors, multi-core processors,
microcontrollers, digital signal processors (DSPs),
field-programmable gate arrays (FPGAs), programmable logic devices (PLDs),
controllers, state machines, gated logic, discrete hardware
components, dedicated hardware finite state machines, or any other
suitable entities that can perform calculations or other
manipulations of information.
[0103] In certain implementations, the recommendations engine 730,
the signature generator 710, or both can be implemented using an
array of computational cores, each having properties that are at
least partly statistically independent of the other cores in the
array. The computational cores are further discussed below.
[0104] According to another implementation, the processes performed
by the recommendations engine 730, the signature generator 710, or
both can be executed by a processing system of the user device 120
or the server 130. Such a processing system may include machine-readable
media for storing software. Software shall be construed broadly to
mean any type of instructions, whether referred to as software,
firmware, middleware, microcode, hardware description language, or
otherwise. Instructions may include code (e.g., in source code
format, binary code format, executable code format, or any other
suitable format of code). The instructions, when executed by the
one or more processors, cause the processing system to perform the
various functions described herein.
[0105] It should be noted that, although FIG. 7 is described with
respect to an interest analyzer 125 included in the user device
120, any or all of the components of the interest analyzer 125 may
be included in another system or systems (e.g., the server 130, the
signature generator system 140, or both) and utilized to perform
some or all of the tasks described herein without departing from
the scope of the disclosure. As an example, the interest analyzer
125 operable in the user device 120 may send multimedia content
elements to the signature generator system 140 and may receive
corresponding signatures therefrom. As another example, the user
device 120 may send signatures to the server 130 and may receive
corresponding recommendations or concepts therefrom. As yet another
example, the interest analyzer 125 may be included in the server
130 and may provide recommendations to the user device 120 based on
multimedia content elements identified by or received from the user
device 120.
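The offload pattern of this paragraph might resemble the following sketch, in which the device-side analyzer posts a multimedia content element to a remote signature generator service; the URL, endpoint, and response schema are illustrative assumptions.

```python
# Hypothetical sketch of offloading signature generation to a remote
# system; the URL, endpoint, and payload format are assumptions.
import requests

SGS_URL = "https://sgs.example.com/signatures"  # placeholder address

def remote_signatures(element_bytes: bytes) -> list[list[float]]:
    response = requests.post(
        SGS_URL,
        files={"element": element_bytes},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["signatures"]        # assumed schema
```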
[0106] It should be noted that various embodiments described herein
above are discussed with respect to a user and, in particular, a
user of a user device, merely for simplicity purposes and without
limitation on the disclosed embodiments. The embodiments described
herein are equally applicable to any entity (e.g., humans, animals,
toys, etc.) having distinguishing facial characteristics that may
be illustrated by multimedia content elements regardless of whether
such entity is the owner or otherwise a user of a user device. For
example, a facial representation may be generated for a dog whose
ears, mouth, nose, fur, and eyes are shown in one or more pictures
or videos.
[0107] The various embodiments disclosed herein can be implemented
as hardware, firmware, software, or any combination thereof.
Moreover, the software is preferably implemented as an application
program tangibly embodied on a program storage unit or computer
readable medium consisting of parts, or of certain devices and/or a
combination of devices. The application program may be uploaded to,
and executed by, a machine comprising any suitable architecture.
Preferably, the machine is implemented on a computer platform
having hardware such as one or more central processing units
("CPUs"), a memory, and input/output interfaces. The computer
platform may also include an operating system and microinstruction
code. The various processes and functions described herein may be
either part of the microinstruction code or part of the application
program, or any combination thereof, which may be executed by a
CPU, whether or not such a computer or processor is explicitly
shown. In addition, various other peripheral units may be connected
to the computer platform such as an additional data storage unit
and a printing unit. Furthermore, a non-transitory computer
readable medium is any computer readable medium except for a
transitory propagating signal.
[0108] All examples and conditional language recited herein are
intended for pedagogical purposes to aid the reader in
understanding the disclosed embodiments and the concepts
contributed by the inventor to furthering the art, and are to be
construed as being without limitation to such specifically recited
examples and conditions. Moreover, all statements herein reciting
principles, aspects, and embodiments of the invention, as well as
specific examples thereof, are intended to encompass both
structural and functional equivalents thereof. Additionally, it is
intended that such equivalents include both currently known
equivalents as well as equivalents developed in the future, i.e.,
any elements developed that perform the same function, regardless
of structure.
* * * * *