U.S. patent application number 13/332248 was filed with the patent office on 2011-12-20 and published on 2012-04-19 for generating search requests from multimodal queries.
This patent application is currently assigned to Microsoft Corporation. Invention is credited to Xin Fan, Ming Jing Li, Zhiwei Li, Wei-Ying Ma, Xing Xie.
Application Number | 13/332248 |
Publication Number | 20120093371 |
Family ID | 37885443 |
Filed Date | 2011-12-20 |
Publication Date | 2012-04-19 |
United States Patent Application 20120093371
Kind Code | A1 |
Li; Ming Jing; et al.
April 19, 2012
GENERATING SEARCH REQUESTS FROM MULTIMODAL QUERIES
Abstract
A method and system for generating a search request from a
multimodal query that includes a query image and query text is
provided. The multimodal query system identifies images of a
collection that are textually related to the query image based on
similarity between words associated with each image and the query
text. The multimodal query system then selects those images of the
identified images that are visually related to the query image. The
multimodal query system may formulate a search request based on
keywords of web pages that contain the selected images and submit
that search request to a search engine service.
Inventors: | Li; Ming Jing (Beijing, CN); Ma; Wei-Ying (Beijing, CN); Xie; Xing (Beijing, CN); Fan; Xin (Hefei, CN); Li; Zhiwei (Beijing, CN) |
Assignee: | Microsoft Corporation, Redmond, WA |
Family ID: | 37885443 |
Appl. No.: | 13/332248 |
Filed: | December 20, 2011 |
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
12/247958 (parent of 13/332248) | Oct 8, 2008 | 8081824
11/233352 (parent of 12/247958) | Sep 21, 2005 | 7457825
Current U.S. Class: | 382/106; 382/165; 382/305 |
Current CPC Class: | Y10S 707/99948 20130101; G06F 16/951 20190101; Y10S 707/99945 20130101 |
Class at Publication: | 382/106; 382/305; 382/165 |
International Class: | G06K 9/00 20060101 G06K009/00; G06K 9/54 20060101 G06K009/54 |
Claims
1. A method in a device for generating a search request for a
multimodal query with a query image and query text, the query image
being stored in electronic form, the method comprising: providing
access to a collection of images and associated words; receiving a
multimodal query that includes a query image and query text;
identifying images of the collection based on textual relatedness
between a word associated with an image and the query text;
selecting images of the identified images based on visual
relatedness between an identified image and the query image;
generating a search request based on keywords associated with the
selected images; submitting the generated search request to a
search engine for identifying documents related to the multimodal
query; and providing an indication of the identified documents as a
search result for the multimodal query.
2. The method of claim 1 wherein the selecting comprises extracting
a feature vector for the identified image, determining the distance
between the extracted feature vector and the feature vector of each
image of the collection, and selecting the images based on the
determined distance.
3. The method of claim 1 wherein the collection includes a
collection of web pages with images and words.
4. The method of claim 1 wherein visual relatedness is based on
similarity in color space and wavelet coefficients.
5. The method of claim 1 including: before identifying images of
the collection, determining whether the query image is a duplicate
of an image of the collection; and when the query image is a
duplicate of an image, generating a search request based on a
keyword associated with that image.
6. The method of claim 5 wherein the query image is a duplicate
when the images are identical.
7. The method of claim 5 wherein the query image is a duplicate
when the images are of the same content but from different points
of view.
8. The method of claim 5 wherein the collection includes signatures
of the images and wherein images are duplicates when they have the
same signature.
9. The method of claim 1 wherein the query text is derived from
audio information.
10. A computer-readable storage device containing
computer-executable instructions for controlling a computing device
to find images related to a multimodal query, the instructions for
performing a method comprising: providing access to web pages with
images, the web pages having words; receiving a query image and
query text of the multimodal query; identifying images of the web
pages based on textual relatedness between words of a web page and
the query text; selecting images of the identified images of the
web pages based on visual relatedness between an identified image
and the query image; and generating a search request based on
keywords associated with the selected images.
11. The computer-readable storage device of claim 10 including:
submitting the generated search request to a search engine for
identifying documents related to the multimodal query; and
providing an indication of the identified documents as a search
result for the multimodal query.
12. The computer-readable storage device of claim 10 wherein the
selecting comprises extracting a feature vector for the query
image, determining the distance between the extracted feature
vector and the feature vector of each image of the collection, and
selecting the images based on the determined distance.
13. The computer-readable storage device of claim 10 including:
before identifying web pages, determining whether the query image
is a duplicate of an image of a web page; and when the query image
is a duplicate of an image of a web page, generating a search
request based on words of the web page that contains the duplicate
image.
14. The computer-readable storage device of claim 10 wherein visual
relatedness is based on similarity in color space and wavelet
coefficients.
15. The computer-readable storage device of claim 13 wherein the
query text is derived from audio information.
16. A computing device for generating a search request for a
multimodal query with a query image and query text, comprising: a
memory storing computer-executable instructions of: a component
that identifies images of a collection of images based on textual
relatedness between a word associated with an image and the query
text; a component that selects images of the identified images
based on visual relatedness between an identified image and the
query image; and a component that generates a search request based
on keywords associated with the selected images; a processor that
executes the computer-executable instructions stored in the
memory.
17. The computing device of claim 16 including: a component that
submits the generated search request to a search engine for
identifying documents related to the multimodal query; and a
component that provides an indication of the identified documents
as a search result for the multimodal query.
18. The computing device of claim 17 wherein the component that
selects extracts a feature vector for the identified image,
determines the distance between the extracted feature vector and
the feature vector of each image of the collection, and selects the
images based on the determined distance.
19. The computing device of claim 17 including a component that
before identifying images of the collection, determines whether the
query image is a duplicate of an image of the collection and when
the query image is a duplicate of an image, generates a search
request based on a keyword associated with that image.
20. The computing device of claim 17 wherein the collection
includes signatures of the images and wherein images are duplicates
when they have the same signature.
Description
CROSS REFERENCE TO RELATED APPLICATION
[0001] This application is a continuation application of U.S.
patent application Ser. No. 12/247,958, filed on Oct. 8, 2008, now
U.S. Pat. No. 8,081,824, issued on Dec. 20, 2011, entitled
"GENERATING SEARCH REQUESTS FROM MULTIMODAL QUERIES," which is a
divisional application of U.S. patent application Ser. No.
11/233,352, filed on Sep. 21, 2005, now U.S. Pat. No. 7,457,825,
issued on Nov. 25, 2008, entitled "GENERATING SEARCH REQUESTS FROM
MULTIMODAL QUERIES," both of which are incorporated herein in their
entireties by reference.
BACKGROUND
[0002] Many search engine services, such as Google and Overture,
provide for searching for information that is accessible via the
Internet. These search engine services allow users to search for
display pages, such as web pages, that may be of interest to users.
After a user submits a search request or query that includes search
terms, the search engine service identifies web pages that may be
related to those search terms. To quickly identify related web
pages, the search engine services may maintain a mapping of
keywords to web pages. This mapping may be generated by "crawling
and indexing" the web (i.e., the World Wide Web) to identify the
keywords of each web page. To crawl the web, a search engine
service may use a list of root web pages to identify all web pages
that are accessible through those root web pages. The keywords of
any particular web page can be identified using various well-known
information retrieval techniques, such as identifying the words of
a headline, the words supplied in the metadata of the web page, the
words that are highlighted, and so on. The search engine service
then ranks the web pages of the search result based on the
closeness of each match, web page popularity (e.g., Google's
PageRank), and so on. The search engine service may also generate a
relevance score to indicate how relevant the information of the web
page may be to the search request. The search engine service then
displays to the user links to those web pages in an order that is
based on their rankings.
[0003] These search engine services may, however, not be
particularly useful in certain situations. In particular, it can be
difficult to formulate a suitable search request that effectively
describes the needed information. For example, if a person sees a
flower on the side of a road and wants to learn the identity of the
flower, the person, upon returning home, may formulate the search
request "picture of yellow tulip-like flower in Europe" (e.g.,
yellow tulip) in hopes of seeing a picture of the flower.
Unfortunately, the search result may identify so many web pages
that it may be virtually impossible for the person to locate the
correct picture assuming that the person can even accurately
remember the details of the flower. If the person has a mobile
device, such as a personal digital assistant ("PDA") or cell phone,
the person may be able to submit the search request while at the
side of the road. Such mobile devices, however, have limited input
and output capabilities, which make it difficult both to enter the
search request and to view the search result.
[0004] If the person, however, is able to take a picture of the
flower, the person may then be able to use a Content Based
Information Retrieval ("CBIR") system to find a similar looking
picture. Although the detection of duplicate images can be achieved
when the image database of the CBIR system happens to contain a
duplicate image, the image database will not contain a duplicate of
the picture of the flower at the side of the road. If a duplicate
image is not in the database, it can be prohibitively expensive
computationally, if even possible, to find a "matching" image. For
example, if the image database contains an image of a field of
yellow tulips and the picture contains only a single tulip, then
the CBIR system may not recognize the images as matching.
SUMMARY
[0005] A method and system for generating a search request from a
multimodal query is provided. The multimodal query system inputs a
multimodal query that includes a query image and query text. The
multimodal query system provides a collection of images along with
one or more words associated with each image. The multimodal query
system identifies images of the collection that are textually
related to the query image based on similarity between associated
words and the query text. The multimodal query system then selects
those images of the identified images that are visually related to
the query image. The multimodal query system may formulate a search
request based on keywords of the web pages that contain the
selected images and submit that search request to a search engine
service, a dictionary service, an encyclopedia service, or the
like. Upon receiving the search result, the multimodal query system
provides that search result as the search result for the multimodal
query.
[0006] This Summary is provided to introduce a selection of
concepts in a simplified form that are further described below in
the Detailed Description. This Summary is not intended to identify
key features or essential features of the claimed subject matter,
nor is it intended to be used as an aid in determining the scope of
the claimed subject matter.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] FIG. 1 is a block diagram that illustrates the overall
processing of the multimodal query system in one embodiment.
[0008] FIG. 2 is a block diagram that illustrates components of the
multimodal query system in one embodiment.
[0009] FIG. 3 is a flow diagram that illustrates the processing of
the perform multimodal query component in one embodiment.
[0010] FIG. 4 is a diagram that illustrates the generating of a
signature of an image in one embodiment.
[0011] FIG. 5 is a flow diagram that illustrates the processing of
the calculate image signature component in one embodiment.
[0012] FIG. 6 is a flow diagram that illustrates the processing of
the find related images component in one embodiment.
[0013] FIG. 7 is a flow diagram that illustrates the processing of
the identify images by textual relatedness component in one
embodiment.
[0014] FIG. 8 is a flow diagram that illustrates the processing of
the select images by visual relatedness component in one
embodiment.
[0015] FIG. 9 is a flow diagram that illustrates the processing of
the create indexes component in one embodiment.
[0016] FIG. 10 is a flow diagram that illustrates the processing of
the generate signature-to-image index component in one
embodiment.
[0017] FIG. 11 is a flow diagram that illustrates the processing of
the generate image-to-related-information index component in one
embodiment.
[0018] FIG. 12 is a flow diagram that illustrates the processing of
the select keywords for web page component in one embodiment.
[0019] FIG. 13 is a flow diagram that illustrates the processing of
the score keywords of web page component in one embodiment.
[0020] FIG. 14 is a flow diagram that illustrates the processing of
the generate word-to-image index in one embodiment.
DETAILED DESCRIPTION
[0021] A method and system for generating a search request from a
multimodal query is provided. In one embodiment, the multimodal
query system inputs a multimodal query that includes an image
(i.e., query image) and verbal information (i.e., query text). For
example, a multimodal query may include a picture of a flower along
with the word "flower." The verbal information may be input as text
via a keyboard, audio via a speaker, and so on. The multimodal
query system provides a collection of images along with one or more
words associated with each image. For example, each image of the
collection may have associated words that describe the subject of
the image. In the case of an image of a yellow tulip, the
associated words may include yellow, tulip, lily, flower, and so
on. The multimodal query system identifies images of the collection
whose associated words are related to the query text. The
identifying of images based on relatedness to the query text helps
to reduce the set of images that may be related to the query image.
The multimodal query system then selects those images of the
identified images that are visually related to the query image. For
example, the multimodal query system may use a content-based
information retrieval ("CBIR") system to determine which of the
identified images are most visually similar to the query image. In
one embodiment, the multimodal query system may return the selected
images as the search result. For example, the multimodal query
system may provide links to web pages that contain the selected
images. In another embodiment, the multimodal query system may
formulate a search request based on keywords of the web pages that
contain the selected images and submit that search request to a
search engine service, a dictionary service, an encyclopedia
service, or the like. For example, the keywords of the web pages
that contain the selected images may include the phrases yellow
tulip, tulipa, Liliaceae lily flower, Holland yellow flower, and so
on, and the formulated search request may be "yellow tulip lily
flower Holland." Upon receiving the search result, the multimodal
query system provides that search result as the search result for
the multimodal query. In this way, the multimodal query system
allows the multimodal query to specify needed information more
precisely than is specified by a unimodal query (e.g., query image
alone or query text alone).
[0022] In one embodiment, the multimodal query system may generate
from the collection of images a word-to-image index for use in
identifying the images that are related to the query text. The
word-to-image index maps images to their associated words. For
example, the words tulip, flower, and yellow may map to the image
of a field of yellow tulips. The multimodal query system may
generate the collection of images from a collection of web pages
that each contain one or more images. The multimodal query system
may assign a unique image identifier to each image of a web page.
The multimodal query system may then identify words associated with
the image. For each associated word, the multimodal query system
adds an entry that maps the word to the image identifier. The
multimodal query system uses these entries when identifying images
that are related to the query text. The multimodal query system may
use conventional techniques to identify the images that are most
textually related to the query text based on analysis of the
associated words.
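
By way of illustration, the word-to-image index can be realized as a simple inverted index. The following Python sketch is illustrative only; the dictionary layout, tokenization, and names are assumptions, not the implementation disclosed by the application.

```python
from collections import defaultdict

# Minimal sketch of the word-to-image index: word -> set of image ids.
word_to_image = defaultdict(set)

def index_image(image_id, associated_words):
    """Add an entry mapping each associated word to the image identifier."""
    for word in associated_words:
        word_to_image[word.lower()].add(image_id)

def lookup(query_text):
    """Return ids of images whose associated words overlap the query text."""
    ids = set()
    for word in query_text.lower().split():
        ids |= word_to_image.get(word, set())
    return ids

# Example from the text: tulip, flower, and yellow map to one image.
index_image(42, ["tulip", "flower", "yellow"])
assert 42 in lookup("yellow flower")
```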
[0023] In one embodiment, the multimodal query system may generate
from the collection of images an image-to-related-information index
for use in selecting the identified images that are visually
related to the query image. The image-to-related-information index
may map each image to a visual feature vector of the image, a
bitmap of the image, a web page that contains the image, and
keywords of the web page that are associated with the image. For
each image, the multimodal query system generates a visual feature
vector of features (e.g., average RGB value) that represents the
image. When determining whether an image of the collection is
visually related to a query image, the multimodal query system
generates a visual feature vector for the query image and compares
it to the visual feature vector of the image-to-related-information
index. The multimodal query system may identify, from the web page
that contains an image, keywords associated with the image and
store an indication of those keywords in the
image-to-related-information index. The multimodal query system
uses the keywords associated with the selected images to formulate
a unimodal or text-based search request for the multimodal
query.
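
A per-image record for the image-to-related-information index might be sketched as follows. The field names and types are hypothetical; the application specifies only that each image maps to a visual feature vector, a bitmap, the containing web page, and associated keywords.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record for one entry of the image-to-related-information
# index; the bitmap field is omitted here for brevity.
@dataclass
class ImageInfo:
    image_id: int
    feature_vector: List[float]   # visual features, e.g., average RGB values
    page_url: str                 # web page that contains the image
    keywords: List[str] = field(default_factory=list)  # keywords of that page

image_index = {}  # image_id -> ImageInfo

image_index[42] = ImageInfo(
    image_id=42,
    feature_vector=[0.81, 0.72, 0.11],
    page_url="http://example.com/tulips",   # hypothetical URL
    keywords=["yellow tulip", "tulipa"],
)
```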
[0024] In one embodiment, the multimodal query system may initially
search the collection of images to determine whether there is a
duplicate image. If a duplicate image is found, then the multimodal
query system may use the keywords associated with that image (e.g.,
from the image-to-related-information index) to formulate a search
request based on the multimodal query. If no duplicate image is
found, then the multimodal query system uses the query text to
identify images and then selects from those identified images that
are textually and visually related to the query image as described
above. The multimodal query system may generate a
signature-to-image index for identifying duplicate images by
comparing signatures of the images of the collection to the
signature of a query image. The multimodal query system may use
various hashing algorithms to map an image to a signature that has
a relatively high likelihood of being unique to that image within
the collection (i.e., no collisions). To identify duplicate images,
the multimodal query system generates a signature for the query
image and determines whether the signature-to-image index contains
an entry with the same signature.
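
The duplicate check then reduces to a dictionary lookup, as in this sketch. The helper compute_signature stands in for the hashing routine of FIGS. 4-5 (a sketch of which accompanies FIG. 5 below), and the index variables come from the earlier sketches.

```python
# signature -> image_id; populated when the collection is indexed.
signature_to_image = {}

def find_duplicate_keywords(query_image):
    """Return keywords of a duplicate image, or None if no duplicate exists."""
    sig = compute_signature(query_image)      # hashing routine of FIGS. 4-5
    image_id = signature_to_image.get(sig)
    if image_id is None:
        return None                           # fall back to text+visual search
    return image_index[image_id].keywords     # formulate the request from these
```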
[0025] FIG. 1 is a block diagram that illustrates the overall
processing of the multimodal query system in one embodiment. The
input to the multimodal query system is a multimodal query that
includes a query image 101 and a query text 102. The system
initially generates a signature for the query image. In decision
block 103, if the signature-to-image index contains an entry with a
matching signature, then the collection contains a duplicate image
and the system continues at block 106, else the system continues at
block 104. In block 104, the system identifies images that are
textually related to the query text using the word-to-image index.
Before identifying the images, the system may use various
techniques to expand the query text, such as by adding to the query
text synonyms of the original words of the query text. The use of
the expanded query text may help improve the chances of identifying
the most textually related images. The output of block 104 is the
identified images that are most textually related. In block 105,
the system selects from the identified textually related images
those images that are visually related to the query image using the
image-to-related-information index. The system may determine the
visual distance between the visual feature vector of the query
image and the visual feature vector of each image of the collection
and select the images with the smallest visual distances as being
most visually related. The output of block 105 is the selected
visually related images. In block 106, the system formulates a
search request based on the images selected in block 105 or on the
duplicate image identified in block 103. The system retrieves
the keywords associated with the selected images or the duplicate
image and generates a text-based search request from those
keywords. In block 107, the system submits the search request to a
search engine service, a dictionary service, an encyclopedia
service, or the like. The system then returns the search result
provided by the search engine as the search result for the
multimodal query.
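
Under the assumptions of the earlier sketches, the FIG. 1 flow could be tied together as follows. The helpers identify_textually_related and select_visually_related are sketched with FIGS. 7 and 8 below (their signatures are simplified here), and search_engine is a hypothetical wrapper around any text search service.

```python
def perform_multimodal_query(query_image, query_text, search_engine):
    """Illustrative end-to-end flow of FIG. 1 (blocks 103-107)."""
    keywords = find_duplicate_keywords(query_image)             # blocks 103/106
    if keywords is None:
        candidates = identify_textually_related(query_text)     # block 104
        related = select_visually_related(query_image, candidates)  # block 105
        keywords = [kw for img in related
                    for kw in image_index[img].keywords]
    # Block 106: formulate a text-based request from the keywords
    # (deduplicated, order preserved); block 107: submit it.
    request = " ".join(dict.fromkeys(keywords))
    return search_engine.search(request)
```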
[0026] FIG. 2 is a block diagram that illustrates components of the
multimodal query system in one embodiment. The multimodal query
system includes an indexing component 210, a data store 220, and a
querying component 230. The indexing component generates the
indexes from a collection of web pages. The indexing component
includes a generate indexes component 211, a generate
signature-to-image index component 212, a generate
image-to-related-information index component 213, and a generate
word-to-image index component 214. The generate indexes component
invokes the other components of the indexing component to generate
the appropriate index. The data store 220 includes a web page store
221, a signature-to-image index 222, an
image-to-related-information index 223, and a word-to-image index
224. The web page store contains a collection of web pages from
which the indexes are generated. The indexes may be organized using
various data structures, such as hash tables, B-trees, ordered
lists, and so on. In addition, the indexes may be represented by a single data
structure or separate data structures. The querying component 230
includes a perform multimodal query component 231, a calculate
image signature component 232, a find related images component 233,
an identify images by textual relatedness component 234, and a
select images by visual relatedness component 235. The perform
multimodal query component is invoked to perform a multimodal query
on an input query image and an input query text. The component
invokes the calculate image signature component to generate a
signature for the query image for use in determining whether the
collection of images contains a duplicate of the query image. The
component also invokes the find related images component to find
images that are related when no duplicate image has been found. The
find related images component invokes the identify images by
textual relatedness component and the select images by visual
relatedness component to find the related images. The perform
multimodal query component then formulates a text-based search
request based on the keywords associated with the related images
and submits the search request to a search engine service, a
dictionary service, an encyclopedia service, or the like to
generate the search result for the multimodal query.
[0027] The computing devices on which the multimodal query system
may be implemented may include a central processing unit, memory,
input devices (e.g., keyboard and pointing devices), output devices
(e.g., display devices), and storage devices (e.g., disk drives).
The memory and storage devices are computer-readable media that may
contain instructions that implement the multimodal query system. In
addition, the data structures may be stored or transmitted via a
data transmission medium, such as a signal on a communications
link. Various communications links may be used to connect
components of the system, such as the Internet, a local area
network, a wide area network, a point-to-point dial-up connection,
a cell phone network, and so on.
[0028] Embodiments of the multimodal query system may be
implemented in various operating environments that include personal
computers, server computers, hand-held or laptop devices,
multiprocessor systems, microprocessor-based systems, programmable
consumer electronics, digital cameras, network PCs, minicomputers,
mainframe computers, distributed computing environments that
include any of the above systems or devices, and so on. The devices
may include cell phones, personal digital assistants, smart phones,
personal computers, programmable consumer electronics, digital
cameras, and so on.
[0029] The multimodal query system may be described in the general
context of computer-executable instructions, such as program
modules, executed by one or more computers or other devices.
Generally, program modules include routines, programs, objects,
components, data structures, and so on that perform particular
tasks or implement particular abstract data types. Typically, the
functionality of the program modules may be combined or distributed
as desired in various embodiments.
[0030] FIG. 3 is a flow diagram that illustrates the processing of
the perform multimodal query component in one embodiment. The
component is passed a multimodal query that includes a query image
and a query text. The query text includes one or more query words.
In block 301, the component invokes the calculate image signature
component to calculate the signature of the query image. In
decision block 302, if the signature-to-image index contains the
calculated signature, then a duplicate image has been found and the
component continues at block 305, else the component continues at
block 303. In decision block 303, if query text was provided (e.g.,
not blank), then the component continues at block 304, else the
component completes without performing the search. In block 304,
the component invokes the find related images component to find
images related to the query image. In block 305, the component
extracts keywords from the image-to-related-information index for
the related images or the duplicate image. In block 306, the
component formulates a search request based on the keywords and
submits the search request to a search engine. The component then
completes.
[0031] FIG. 4 is a diagram that illustrates the generating of a
signature of an image in one embodiment. Image 401 represents the
image for which the signature is to be generated. Initially, the
system converts the image to a gray level image as represented by
image 402. The system then divides the image into blocks (e.g.,
8-by-8 blocks) as illustrated by image 403. The system then
calculates the average intensity of each block to generate matrix
404 as indicated by the following equation:
$$I_{ij} = \frac{\sum_{x=0}^{w-1}\sum_{y=0}^{h-1}\mathrm{Int}(x,y)}{w\,h},\qquad i = 0,\ldots,7,\; j = 0,\ldots,7 \qquad (1)$$
where $I_{ij}$ is the average intensity for block $ij$ and $x$ and $y$
represent the pixels of block $ij$. The system then performs a
two-dimensional discrete cosine transform ("DCT") on the matrix.
The system discards the DC coefficient of the DCT matrix and
selects 48 AC coefficients of the DCT matrix in a zigzag pattern as
illustrated by pattern 405 resulting in an AC coefficients vector
406. The system then performs a principal component analysis
("PCA") to generate a 32-dimension feature vector 407 as
illustrated by the following equation:
$$Y_n = P^{\mathsf T} A_m \qquad (2)$$
where $Y_n$ represents the 32-dimension feature vector, $A_m$
represents the 48 AC coefficients, and $P$ represents an $m \times n$
transform matrix whose columns are the $n$ orthonormal eigenvectors
corresponding to the first $n$ largest eigenvalues of the covariance
matrix $\Sigma_{A_m}$, with $P^{\mathsf T}P = I_n$. The system may
train the transform matrix using a collection of sample web pages.
Finally, the system generates a 32-bit hash value 408 from the
32-dimension feature vector by setting each of the 32 bits to 1 if
the corresponding element of the feature vector is greater than 0,
and to 0 otherwise. One skilled in the art will
appreciate that many different algorithms may be used to generate a
signature for an image.
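
The following sketch reproduces the signature computation under stated assumptions: the image is already a grayscale array, the blocks form an 8-by-8 grid, and P is a pre-trained 48x32 PCA transform matrix. It is a plausible reading of FIGS. 4-5, not the disclosed source code.

```python
import numpy as np
from scipy.fftpack import dct

# Zigzag ordering of an 8x8 matrix (as in JPEG coefficient scanning).
ZIGZAG = sorted(((i, j) for i in range(8) for j in range(8)),
                key=lambda p: (p[0] + p[1],
                               p[0] if (p[0] + p[1]) % 2 else -p[0]))

def compute_signature(gray_image: np.ndarray, P: np.ndarray) -> int:
    """32-bit image signature per FIGS. 4-5 (illustrative)."""
    h, w = gray_image.shape
    bh, bw = h // 8, w // 8
    # Average intensity of each block of the 8x8 grid (equation 1).
    I = np.array([[gray_image[i*bh:(i+1)*bh, j*bw:(j+1)*bw].mean()
                   for j in range(8)] for i in range(8)])
    # Two-dimensional DCT of the intensity matrix.
    D = dct(dct(I, axis=0, norm="ortho"), axis=1, norm="ortho")
    # Discard the DC coefficient; keep 48 AC coefficients in zigzag order.
    ac = np.array([D[i, j] for i, j in ZIGZAG[1:49]])
    # Project to 32 dimensions with the trained transform (equation 2).
    y = P.T @ ac
    # Bit is 1 where the corresponding feature component is positive.
    bits = (y > 0).astype(int)
    return int("".join(map(str, bits)), 2)
```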
[0032] FIG. 5 is a flow diagram that illustrates the processing of
the calculate image signature component in one embodiment. The
component is passed an image and generates a signature for the
image. In block 501, the component converts the image into a gray
level image. In block 502, the component divides the image into
blocks. In block 503, the component calculates the average
intensity of each block to generate an intensity matrix. In block
504, the component performs a two-dimensional discrete cosine
transform on the intensity matrix. In block 505, the component
extracts 48 AC coefficients from the DCT matrix. In block 506, the
component performs a PCA to generate a 32-dimension feature vector
from the 48 AC coefficients. In block 507, the component generates
a 32-bit signature from the 32-dimension feature vector and then
completes.
[0033] FIG. 6 is a flow diagram that illustrates the processing of
the find related images component in one embodiment. The component
is passed a multimodal query containing a query image and a query
text and returns an indication of images that are related to the
multimodal query. In block 601, the component invokes the identify
images by textual relatedness component to identify images that are
related to the query text. In block 602, the component invokes the
select images by visual relatedness component to select those
identified images that are visually related to the query image. The
component then returns the identifiers of the selected images as
the related images.
[0034] FIG. 7 is a flow diagram that illustrates the processing of
the identify images by textual relatedness component in one
embodiment. The component is passed a query text and returns the
identification of images that are related to the query text as
indicated by the word-to-image index. In block 701, the component
removes stop words (e.g., a, the, and an) from the query text. In
block 702, the component applies stemming rules, such as the Porter
stemming algorithm, to generate the stems for the words of the
query text. For example, the words flowers, flowering, and flowered
may be transformed to their stem flower. In block 703, the
component expands the words of the query text to include synonyms
and hyponyms using, for example, the Wordnet system. For example,
the word flower may be expanded to include bloom, blossom, heyday,
efflorescence, flush, peony, lesser celandine, pilewort, Ranunculus
ficaria, anemone, wildflower, and so on. In block 704, the
component searches the word-to-image index to locate images with
associated words that are related to the expanded query text. In
block 705, the component ranks the images based on how well the
associated words match the expanded query text. In block 706, the
component identifies the highest ranking images and returns the
identified images. The component may treat words associated with
each image as a document and use standard query techniques to find
the documents that are most closely related to the expanded query
text.
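
A toy version of this text-side filtering is sketched below. The stop list, suffix-stripping stemmer, and synonym table are crude stand-ins for the Porter stemmer and WordNet mentioned in the text; word_to_image is the inverted index sketched earlier.

```python
from collections import defaultdict

STOP_WORDS = {"a", "an", "the"}
SYNONYMS = {"flower": ["bloom", "blossom", "wildflower"]}  # toy WordNet

def stem(word):
    """Crude stand-in for the Porter stemmer: strip common suffixes."""
    for suffix in ("ing", "ed", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[:-len(suffix)]
    return word

def expand_query(query_text):
    """Blocks 701-703: remove stop words, stem, add synonyms/hyponyms."""
    words = [w for w in query_text.lower().split() if w not in STOP_WORDS]
    expanded = {stem(w) for w in words}
    for w in list(expanded):
        expanded.update(SYNONYMS.get(w, []))
    return expanded

def identify_textually_related(query_text, top_k=100):
    """Blocks 704-706: rank images by overlap with the expanded query."""
    scores = defaultdict(int)
    for word in expand_query(query_text):
        for img in word_to_image.get(word, ()):
            scores[img] += 1
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```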
[0035] FIG. 8 is a flow diagram that illustrates the processing of
the select images by visual relatedness component in one
embodiment. The component is passed the query image and an
indication of the images that were identified based on textual
relatedness. The component selects those identified images that are
visually related to the query image. In block 801, the component
extracts a feature vector for the query image. In one embodiment,
the feature vector for an image includes three features: a
64-element RGB color histogram feature, a 64-element HSV color
histogram feature, and a 192-element Daubechies' wavelet
coefficient feature. One skilled in the art will appreciate that
any of a variety of well-known techniques can be used to generate a
feature vector for an image. In blocks 802-807, the component loops
determining the distance between the feature vector of the query
image and the feature vector of each image of the collection. In
block 802, the component selects the next image. In decision block
803, if all the images have already been selected, then the
component continues at block 808, else the component continues at
block 804. In block 804, the component calculates the RGB distance
between the selected image and the query image. In block 805, the
component calculates the HSV distance between the selected image
and the query image. In block 806, the component calculates the
Daubechies' distance between the selected image and the query
image. In block 807, the component calculates the normalized
distance between the selected image and the query image as
represented by the following equation:
$$D_j = w_{\mathrm{RGB}}\big\|F_{\mathrm{RGB}}^{query}-F_{\mathrm{RGB}}^{j}\big\|_1 + w_{\mathrm{HSV}}\big\|F_{\mathrm{HSV}}^{query}-F_{\mathrm{HSV}}^{j}\big\|_1 + w_{\mathrm{Daub}}\big\|F_{\mathrm{Daub}}^{query}-F_{\mathrm{Daub}}^{j}\big\|_1,\quad j = 1,\ldots,M \qquad (3)$$
where $F_{\mathrm{RGB}}^{query}$, $F_{\mathrm{HSV}}^{query}$, and
$F_{\mathrm{Daub}}^{query}$ are the feature vectors of the query image,
$F_{\mathrm{RGB}}^{j}$, $F_{\mathrm{HSV}}^{j}$, and $F_{\mathrm{Daub}}^{j}$
are the feature vectors of the selected image, and $\|\cdot\|_1$ is a
normalization operator. In one embodiment, the component uses the constant
weights $w_{\mathrm{RGB}}=0.3$, $w_{\mathrm{HSV}}=0.5$, and
$w_{\mathrm{Daub}}=0.2$. The component
then loops to block 802 to select the next image. In block 808, the
component selects the images with the smallest distances and
returns the selected images.
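
The distance computation of equation 3 is straightforward to sketch. Here the normalization operator is assumed to be division by the vector length, and extraction of the RGB and HSV histograms and Daubechies' wavelet coefficients is assumed to happen upstream.

```python
import numpy as np

WEIGHTS = {"rgb": 0.3, "hsv": 0.5, "daub": 0.2}  # constants from the text

def l1(a, b):
    """Normalized L1 distance between two feature vectors (assumption)."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return np.abs(a - b).sum() / len(a)

def visual_distance(query_feats, image_feats):
    """Equation 3: weighted sum of per-feature L1 distances.

    Each argument is a dict {"rgb": ..., "hsv": ..., "daub": ...}.
    """
    return sum(w * l1(query_feats[k], image_feats[k])
               for k, w in WEIGHTS.items())

def select_visually_related(query_feats, candidates, top_k=10):
    """Block 808: images with the smallest distances are most related.

    candidates: dict of image_id -> feature dict.
    """
    return sorted(candidates,
                  key=lambda i: visual_distance(query_feats,
                                                candidates[i]))[:top_k]
```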
[0036] FIGS. 9-14 are flow diagrams that illustrate the creation of
the indexes. FIG. 9 is a flow diagram that illustrates the
processing of the create indexes component in one embodiment. The
component is passed a collection of web pages that contain images.
In block 901, the component invokes the generate signature-to-image
index component. In block 902, the component invokes the generate
image-to-related-information index component. In block 903, the
component invokes the generate word-to-image index component. The
component then completes.
[0037] FIG. 10 is a flow diagram that illustrates the processing of
the generate signature-to-image index component in one embodiment.
The component is passed the images of the web pages and calculates
a signature for each image and stores a mapping of that signature
to the image. In block 1001, the component selects the next image
and assigns to it a unique image identifier. In decision block
1002, if all the images have already been selected, then the
component returns, else the component continues at block 1003. In
block 1003, the component invokes the calculate image signature
component to calculate the signature for the selected image. In
block 1004, the component stores an entry in the signature-to-image
index that maps the signature to the image identifier and then
loops to block 1001 to select the next image.
[0038] FIG. 11 is a flow diagram that illustrates the processing of
the generate image-to-related-information index component in one
embodiment. The component is passed a collection of web pages and
generates a mapping of the images of the web pages to the
corresponding keywords. In blocks 1101-1103, the component loops
selecting each web page and image combination and identifies the
keywords for the image of the web page (i.e., a web page can have
multiple images). In block 1101, the component selects the next web
page and image combination. In decision block 1102, if all the web
page and image combinations have already been selected, then the
component continues at block 1104, else the component continues at
block 1103. In block 1103, the component invokes the select
keywords for web page component and then loops to block 1101 to
select the next web page and image combination. In blocks
1104-1108, the component loops selecting the highest scored
keywords of each web page for each image. In block 1104, the
component selects the next web page and image combination. In
decision block 1105, if all the web page and image combinations
have already been selected, the component returns, else the
component continues at block 1106. In block 1106, the component
invokes the score keywords of web page component. In block 1107,
the component selects the highest scored keywords. In block 1108,
the component stores an entry in the image-to-related-information
index that maps the image identifier of the image to the keywords.
In block 1109, the component stores other related information, such
as the visual feature vector for the image and the identification
of the web page, in the entry of the image-to-related-information
index. The component then loops to block 1104 to select the next
web page and image combination.
[0039] FIG. 12 is a flow diagram that illustrates the processing of
the select keywords for web page component in one embodiment. The
component is passed a web page along with the identification of an
image. The component identifies from that web page the keywords
associated with the image. In block 1201, the component creates a
document from the text of the web page that is related to the
image. For example, the component may analyze the document object
model ("DOM") representation of the web page to identify text that
surrounds the image. In block 1202, the component identifies
phrases within the document such as all possible phrases of length
four or less. In block 1203, the component removes non-boundary
words (e.g., "a," "the," "to") from the ends of the phrases. In
block 1204, the component removes stop words from the phrases. In
block 1205, the component counts the number of occurrences of each
phrase within the document. The component then returns the phrases
as the keywords.
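
A compact sketch of this phrase extraction, under the assumption that "length four or less" means up to four words and with toy word lists, might be:

```python
from collections import Counter

NON_BOUNDARY = {"a", "the", "to"}            # trimmed from phrase ends
STOP_WORDS = {"a", "an", "the", "of", "to"}  # toy stop list

def extract_keywords(text, max_len=4):
    """Blocks 1202-1205: candidate phrases and their occurrence counts."""
    words = text.lower().split()
    phrases = Counter()
    for n in range(1, max_len + 1):
        for i in range(len(words) - n + 1):
            phrase = words[i:i + n]
            # Trim non-boundary words from both ends of the phrase.
            while phrase and phrase[0] in NON_BOUNDARY:
                phrase = phrase[1:]
            while phrase and phrase[-1] in NON_BOUNDARY:
                phrase = phrase[:-1]
            # Skip phrases consisting only of stop words.
            if phrase and not all(w in STOP_WORDS for w in phrase):
                phrases[" ".join(phrase)] += 1
    return phrases  # phrase -> occurrence count

print(extract_keywords("a picture of the yellow tulip").most_common(3))
```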
[0040] FIG. 13 is a flow diagram that illustrates the processing of
the score keywords of web page component in one embodiment. The
component is passed a web page, an image of that web page, and
keywords, and scores the importance of each keyword to the image.
The component uses a term frequency by inverse document frequency
("TF-IDF") score for each word of the collection of web pages. The
component may calculate the term frequency by inverse document
frequency score according to the following equation:
$$\mathit{tf\text{-}idf}_i = \frac{n_{id}}{n_d}\log\frac{N}{n_i} \qquad (4)$$
where $\mathit{tf\text{-}idf}_i$ represents the score for word $i$,
$n_{id}$ represents the number of occurrences of word $i$ on web page $d$,
$n_d$ represents the total number of words on web page $d$, $n_i$
represents the number of pages that contain word $i$, and $N$
represents the number of web pages in the collection of web pages.
In blocks 1301-1307, the component loops calculating a score for
each phrase of the document. In block 1301, the component selects
the next keyword, which can contain a single word or multiple
words. In decision block 1302, if all the keywords have already
been selected, then the component returns the scores for the
keywords, else the component continues at block 1303. In block 1303,
the component calculates a mutual information score of the selected
keyword as represented by the following equation:
$$MI(P) = \log\frac{\mathrm{Occu}(P)\times N(|P|)}{\mathrm{Occu}(\mathrm{prefix}(P))\times \mathrm{Occu}(\mathrm{suffix}(P))} \qquad (5)$$
where $MI(P)$ represents the mutual information score for keyword $P$,
$\mathrm{Occu}(P)$ represents the count of occurrences of $P$ on the web
page, $|P|$ represents the number of words $P$ contains, $N(|P|)$ represents
the total number of keywords (i.e., phrases) with length less than
$|P|$, $\mathrm{prefix}(P)$ represents the prefix of $P$ with length $|P|-1$, and
$\mathrm{suffix}(P)$ represents the suffix of $P$ with length $|P|-1$. In decision
block 1304, if the mutual information score is greater than a
threshold, then the component continues at block 1305, else the
component loops to block 1301 to select the next keyword. If the
mutual information score does not meet a threshold level, then the
component considers the keyword to be unimportant and sets its
score to 0. In block 1305, the component calculates the TF-IDF
score for the selected keyword as the average of the TF-IDF scores
of the words of the keyword. In block 1306, the component
calculates a visualization style score ("VSS") to factor in the
visual characteristics of the keyword as represented by the
following equation:
$$VSS(P) = \begin{cases}\mathit{tf\text{-}idf}_{\max}, & \text{if } P \text{ is in the title, alt text, or meta;}\\ \tfrac{1}{4}\,\mathit{tf\text{-}idf}_{\max}, & \text{else if } P \text{ is in bold;}\\ \tfrac{1}{8}\,\mathit{tf\text{-}idf}_{\max}, & \text{else if } P \text{ is in a large font;}\\ 0, & \text{otherwise.}\end{cases} \qquad (6)$$
where $VSS(P)$ represents the VSS score for keyword $P$ and
$\mathit{tf\text{-}idf}_{\max}$ represents the maximum TF-IDF score of all keywords
of the web page. The VSS is based on whether the keyword is in the
title or in metadata and whether the keyword is in bold or in a
large font. One skilled in the art will appreciate that other
visual characteristics could be taken into consideration, such as
position of a keyword on a page, closeness to an image, and so on.
In block 1307, the component calculates a combined score for the
selected keyword according to the following equation:
$$Y = b_0 + \sum_{j=1}^{3} b_j X_j \qquad (7)$$
where $X = \{\mathit{tf\text{-}idf}, MI, VSS\}$ and the coefficients
$b_0,\ldots,b_3$ are empirically determined. The component then loops to
block 1301 to select the next keyword.
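
The scoring of equations 4-7 reduces to a few small functions, sketched here. The corpus statistics, the MI threshold, and the regression coefficients b0..b3 are assumed inputs; the application leaves the coefficients to empirical determination.

```python
import math

def tf_idf(n_id, n_d, n_i, N):
    """Equation 4: term frequency times inverse document frequency."""
    return (n_id / n_d) * math.log(N / n_i)

def mutual_information(occu_p, n_shorter, occu_prefix, occu_suffix):
    """Equation 5: MI(P) for a phrase P; n_shorter is N(|P|)."""
    return math.log(occu_p * n_shorter / (occu_prefix * occu_suffix))

def vss(tfidf_max, in_title_alt_meta, in_bold, in_large_font):
    """Equation 6: visualization style score."""
    if in_title_alt_meta:
        return tfidf_max
    if in_bold:
        return tfidf_max / 4
    if in_large_font:
        return tfidf_max / 8
    return 0.0

def combined_score(tfidf, mi, vss_score, b, mi_threshold=0.0):
    """Equation 7, with low-MI keywords scored 0 (blocks 1304-1307)."""
    if mi <= mi_threshold:
        return 0.0
    return b[0] + b[1] * tfidf + b[2] * mi + b[3] * vss_score
```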
[0041] FIG. 14 is a flow diagram that illustrates the processing of
the generate word-to-image index in one embodiment. The component
may input the image-to-related-information index and add an entry
to the word-to-image index for each word of each keyword for each
image. In block 1401, the component selects the next image. In
decision block 1402, if all the images have already been selected,
then the component returns, else the component continues at block
1403. In block 1403, the component selects the next keyword for the
selected image. In decision block 1404, if all the keywords have
already been selected, then the component loops to block 1401 to
select the next image, else the component continues at block 1405.
In block 1405, the component selects the next word of the selected
keyword. In decision block 1406, if all the words of the selected
keyword have already been selected, then the component loops to
block 1403 to select the next keyword of the selected image, else
the component continues at block 1407. In block 1407, the component
adds an entry to the word-to-image index that maps the selected
word to the image identifier of the selected image. The component
then loops to block 1405 to select the next word of the selected
keyword.
[0042] Although the subject matter has been described in language
specific to structural features and/or methodological acts, it is
to be understood that the subject matter defined in the appended
claims is not necessarily limited to the specific features or acts
described above. Rather, the specific features and acts described
above are disclosed as example forms of implementing the claims.
For example, the multimodal query system may consider images to be
duplicates when they are identical and when they are of the same
content but from different points of view. An example of different
points of view would be pictures of the same building from
different angles or different distances. As used herein, the term
"keyword" refers to a phrase of one or more words. For example,
"yellow tulips" and "tulips" are both keywords. Accordingly, the
invention is not limited except as by the appended claims.
[0043] From the foregoing, it will be appreciated that specific
embodiments of the invention have been described herein for
purposes of illustration, but that various modifications may be
made without deviating from the spirit and scope of the invention.
Accordingly, the invention is not limited except as by the appended
claims.
* * * * *