U.S. patent application number 15/420012, for a computer-implemented system and method for forming document clusters for display, was published by the patent office on 2017-09-14 as publication number 20170262526.
The applicant listed for this patent is FTI Technology LLC. The invention is credited to Dan Gallivan.
United States Patent Application 20170262526, Kind Code A1
Gallivan; Dan
Published: September 14, 2017
Application Number: 15/420012
Family ID: 32851381
Computer-Implemented System And Method For Forming Document
Clusters For Display
Abstract
A computer-implemented system and method for forming document
clusters for display is provided. Concepts are identified from a
set of documents and a subset of the documents that include those
concepts with frequencies of occurrence that occur within a range
of concept frequencies are selected from the set. The documents in
the subset are assigned to clusters. Each cluster includes a center
and a radius, and is placed into a display with the center of that
cluster at a fixed distance from the common origin. A portion of
the placed clusters are further placed along a common vector.
Inventors: Gallivan; Dan (Bainbridge Island, WA)
Applicant: FTI Technology LLC, Annapolis, MD, US
Family ID: 32851381
Appl. No.: 15/420012
Filed: January 30, 2017
Related U.S. Patent Documents

Application Number | Filing Date  | Patent Number | Continued By
14961845           | Dec 7, 2015  | 9558259       | 15420012
14174800           | Feb 6, 2014  | 9208221       | 14961845
13831565           | Mar 14, 2013 | 8650190       | 14174800
10911376           | Aug 3, 2004  | 8402026       | 13831565
09943918           | Aug 31, 2001 | 6778995       | 10911376
Current U.S. Class: 1/1
Current CPC Class: Y10S 707/99943 (20130101); G06F 16/951 (20190101);
G06F 16/9535 (20190101); G06F 16/93 (20190101); G06F 16/287 (20190101);
G06F 16/355 (20190101); G06F 16/283 (20190101)
International Class: G06F 17/30 (20060101) G06F 017/30
Claims
1. A computer-implemented system for forming document clusters for
display, comprising: a set of documents comprising concepts; a
server comprising memory, a central processing unit, an input port
to receive the document set, and an output port, wherein the
central processing unit is configured to: select a subset of the
documents from the set that include those concepts with frequencies
of occurrence that occur within a range of concept frequencies;
assign the documents in the subset to clusters, each cluster
comprising a center and a radius; and place each cluster into a
display with the center of that cluster at a fixed distance from
the common origin, wherein a portion of the placed clusters are
further placed along a common vector.
2. A system according to claim 1, wherein the central processing
unit is further configured to apply tighter ranges of concept
frequencies to larger documents than to shorter documents.
3. A system according to claim 1, wherein the central processing
unit is further configured to form the clusters by creating an
initial cluster and adding additional clusters.
4. A system according to claim 1, wherein the central processing
unit is further configured to: select a document from the subset
and determine a location of the document from the common origin;
determine a location of one of the clusters from the common origin;
calculate a difference between the location of the document and the
location of the cluster; and apply a predetermined threshold to the
calculated difference.
5. A system according to claim 4, wherein the central processing
unit is further configured to place the selected document in the
cluster when the difference is below the predetermined
threshold.
6. A system according to claim 4, wherein the central processing
unit is further configured to select a further cluster for
comparison with the document when the difference is above the
predetermined threshold.
7. A system according to claim 6, wherein the central processing
unit is further configured to create a new cluster when the
difference between the document and each cluster is above the
predetermined threshold.
8. A system according to claim 1, wherein the central processing
unit is further configured to: merge two or more of the clusters
into a single cluster; split one of the clusters into two or more
clusters; and remove outlier clusters.
9. A system according to claim 1, wherein the radius of each
cluster reflects a relative number of the documents included in
that cluster.
10. A system according to claim 1, wherein each cluster comprises a
center of mass and defines a convex volume.
11. A computer-implemented method for forming document clusters for
display, comprising: identifying concepts from a set of documents;
selecting a subset of the documents from the set that include those
concepts with frequencies of occurrence that occur within a range
of concept frequencies; assigning the documents in the subset to
clusters, each cluster comprising a center and a radius; and
placing each cluster into a display with the center of that cluster
at a fixed distance from the common origin, wherein a portion of
the placed clusters are further placed along a common vector.
12. A method according to claim 11, further comprising: applying
tighter ranges of concept frequencies to larger documents than to
shorter documents.
13. A method according to claim 11, further comprising: forming the
clusters, comprising: creating an initial cluster; and adding
additional clusters.
14. A method according to claim 11, further comprising: selecting a
document from the subset and determining a location of the document
from the common origin; determining a location of one of the
clusters from the common origin; calculating a difference between
the location of the document and the location of the cluster; and
applying a predetermined threshold to the calculated
difference.
15. A method according to claim 14, further comprising: when the
difference is below the predetermined threshold, placing the
selected document in the cluster.
16. A method according to claim 14, further comprising: when the
difference is above the predetermined threshold, selecting a
further cluster for comparison with the document.
17. A method according to claim 16, further comprising: when the
difference between the document and each cluster is above the
predetermined threshold, creating a new cluster.
18. A method according to claim 11, further comprising: merging two
or more of the clusters into a single cluster; splitting one of the
clusters into two or more clusters; and removing outlier
clusters.
19. A method according to claim 11, wherein the radius of each
cluster reflects a relative number of the documents included in
that cluster.
20. A method according to claim 11, wherein each cluster comprises
a center of mass and defines a convex volume.
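The assignment logic recited in claims 14 through 17, comparing a document's distance against a predetermined threshold and creating a new cluster only when every comparison fails, can be sketched as follows. This is a hypothetical, one-dimensional illustration with invented names, not the claimed implementation:

```python
def assign(documents, threshold):
    """Place each document in the first cluster whose center lies within
    `threshold` of the document's location from the common origin;
    otherwise create a new cluster (per claims 15-17)."""
    clusters = []  # each cluster: {"center": float, "members": [names]}
    for name, location in documents:
        for cluster in clusters:
            if abs(location - cluster["center"]) < threshold:
                cluster["members"].append(name)  # difference below threshold
                break
        else:
            # Difference exceeded the threshold for every cluster: new cluster.
            clusters.append({"center": location, "members": [name]})
    return clusters

docs = [("d1", 0.10), ("d2", 0.12), ("d3", 0.90)]
print(assign(docs, threshold=0.05))
# two clusters: d1 and d2 near 0.10, d3 alone at 0.90
```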
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This non-provisional patent application is a continuation of
U.S. patent application Ser. No. 14/961,845, filed Dec. 7, 2015,
pending, which is a continuation of U.S. Pat. No. 9,208,221, issued
Dec. 8, 2015, which is a continuation of U.S. Pat. No. 8,650,190,
issued Feb. 11, 2014, which is a continuation of U.S. Pat. No.
8,402,026, issued Mar. 19, 2013, which is a continuation of U.S.
Pat. No. 6,778,995, issued Aug. 17, 2004, the priority dates of
which are claimed and the disclosures of which are incorporated by
reference.
FIELD OF THE INVENTION
[0002] The present invention relates in general to text mining and,
in particular, to a computer-implemented system and method for
forming document clusters for display.
BACKGROUND OF THE INVENTION
[0003] Document warehousing extends data warehousing to content
mining and retrieval. Document warehousing attempts to extract
semantic information from collections of unstructured documents to
provide conceptual information with a high degree of precision and
recall. Documents in a document warehouse share several properties.
First, the documents lack a common structure or shared type.
Second, semantically-related documents are integrated through text
mining. Third, essential document features are extracted and
explicitly stored as part of the document warehouse. Finally,
documents are often retrieved from multiple and disparate sources,
such as over the Internet or as electronic messages.
[0004] Document warehouses are built in stages to deal with a wide
range of information sources. First, document sources are
identified and documents are retrieved into a repository. For
example, the document sources could be electronic messaging folders
or Web content retrieved over the Internet. Once retrieved, the
documents are pre-processed to format and regularize the
information in a consistent manner. Next, during text analysis,
text mining is performed to extract semantic content, including
identifying dominant themes, extracting key features and
summarizing the content. Finally, metadata is compiled from the
semantic context to explicate essential attributes. Preferably, the
metadata is provided in a format amenable to normalized queries,
such as database management tools. Document warehousing is
described in D. Sullivan, "Document Warehousing and Text Mining,
Techniques for Improving Business Operations, Marketing, and
Sales," Chs. 1-3, Wiley Computer Publishing (2001), the disclosure
of which is incorporated by reference.
[0005] Text mining is at the core of the data warehousing process.
Text mining involves the compiling, organizing and analyzing of
document collections to support the delivery of targeted types of
information and to discover relationships between relevant facts.
However, identifying relevant content can be difficult. First,
extracting relevant content requires a high degree of precision and
recall. Precision is the measure of how well the documents returned
in response to a query actually address the query criteria. Recall
is the measure of how much of the content that should have been
returned by the query actually was returned.
Typically, the broader and less structured the documents, the lower
the degree of precision and recall. Second, analyzing an
unstructured document collection without the benefit of a priori
knowledge in the form of keywords and indices can present a
potentially intractable problem space. Finally, synonymy and
polysemy can cloud and confuse extracted content. Synonymy refers
to multiple words having the same meaning and polysemy refers to a
single word with multiple meanings. Fine-grained text mining must
reconcile synonymy and polysemy to yield meaningful results.
[0006] In particular, the transition from syntactic to semantic
content analysis requires a shift in focus from the grammatical
level to the meta level. At a syntactic level, documents are viewed
structurally as sentences comprising individual terms and phrases.
In contrast, at a semantic level, documents are viewed in terms of
meaning. Terms and phrases are grouped into clusters representing
individual concepts and themes.
[0007] Data clustering allows the concepts and themes to be
developed more fully based on the extracted syntactic information.
A balanced set of clusters reflects terms and phrases from every
document in a document set. Each document may be included in one or
more clusters. Conversely, concepts and themes are preferably
distributed over a meaningful range of clusters.
[0008] Creating an initial set of clusters from a document set is
crucial to properly visualizing the semantic content. Generally, a
priori knowledge of semantic content is unavailable when forming
clusters from unstructured documents. The difficulty of creating an
initial cluster set is compounded when evaluating different types
of documents, such as electronic mail (email) and word processing
documents, particularly when included in the same document set.
[0009] In the prior art, several data clustering techniques are
known. Exhaustive matching techniques fit each document into one of
a pre-defined and fixed number of clusters using a closest-fit
approach. However, this approach forces an arbitrary number of
clusters onto a document set and can skew the meaning of the
semantic content mined from the document set.
[0010] A related prior art clustering technique performs gap
analysis in lieu of exhaustive matching. Gaps in the fit of data
points between successive passes are merged as necessary to form
groups of documents into clusters. However, gap analysis is
computationally inefficient, as multiple passes through a data set
are necessary to effectively find a settled set of clusters.
[0011] Therefore, there is a need for an approach to forming
clusters of concepts and themes into groupings of classes with
shared semantic meanings. Such an approach would preferably
categorize concepts mined from a document set into clusters defined
within a pre-specified range of variance. Moreover, such an
approach would not require a priori knowledge of the data
content.
SUMMARY OF THE INVENTION
[0012] The present invention provides a system and method for
generating logical clusters of documents in a multi-dimensional
concept space for modeling semantic meaning. Each document in a set
of unstructured documents is first analyzed for syntactic content
by extracting literal terms and phrases. The semantic content is
then determined by modeling the extracted terms and phrases in
multiple dimensions. Histograms of the frequency of occurrences of
the terms and phrases in each document and over the entire document
set are generated. Related documents are identified by finding
highly correlated term and phrase pairings. These pairings are then
used to calculate Euclidean distances between individual documents.
Those documents corresponding to concepts separated by a Euclidean
distance falling within a predetermined variance are grouped into
clusters by k-means clustering. The remaining documents are grouped
into new clusters. The clusters can be used to visualize the
semantic content.
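The distance step described above can be illustrated with a minimal sketch, assuming concept-frequency vectors over a shared vocabulary; the vocabulary, function names, and toy documents below are assumptions for illustration, not the patented implementation:

```python
from math import sqrt

# Toy vocabulary of extracted concepts (hypothetical example data).
VOCABULARY = ["contract", "merger", "lawsuit"]

def concept_vector(doc_terms, vocabulary):
    """Count each concept's frequency of occurrence in one document."""
    return [doc_terms.count(c) for c in vocabulary]

def euclidean(a, b):
    """Euclidean distance between two concept-frequency vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

v1 = concept_vector(["contract", "contract", "merger"], VOCABULARY)  # [2, 1, 0]
v2 = concept_vector(["contract", "merger", "merger"], VOCABULARY)    # [1, 2, 0]
print(euclidean(v1, v2))
# ~1.414; documents whose distance falls within the variance are clustered
```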
[0013] An embodiment provides a computer-implemented system and
method for forming document clusters for display. Concepts are
identified from a set of documents and a subset of the documents
that include those concepts with frequencies of occurrence that
occur within a range of concept frequencies are selected from the
set. The documents in the subset are assigned to clusters. Each
cluster includes a center and a radius, and is placed into a
display with the center of that cluster at a fixed distance from
the common origin. A portion of the placed clusters are further
placed along a common vector.
[0014] Still other embodiments of the present invention will become
readily apparent to those skilled in the art from the following
detailed description, wherein is described embodiments of the
invention by way of illustrating the best mode contemplated for
carrying out the invention. As will be realized, the invention is
capable of other and different embodiments and its several details
are capable of modifications in various obvious respects, all
without departing from the spirit and the scope of the present
invention. Accordingly, the drawings and detailed description are
to be regarded as illustrative in nature and not as
restrictive.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] FIG. 1 is a block diagram showing a system for efficiently
generating cluster groupings in a multi-dimensional concept space,
in accordance with the present invention.
[0016] FIG. 2 is a block diagram showing the software modules
implementing the document analyzer of FIG. 1.
[0017] FIG. 3 is a process flow diagram showing the stages of text
analysis performed by the document analyzer of FIG. 1.
[0018] FIG. 4 is a flow diagram showing a method for efficiently
generating cluster groupings in a multi-dimensional concept space,
in accordance with the present invention.
[0019] FIG. 5 is a flow diagram showing the routine for performing
text analysis for use in the method of FIG. 4.
[0020] FIG. 6 is a flow diagram showing the routine for creating a
histogram for use in the routine of FIG. 5.
[0021] FIG. 7 is a data structure diagram showing a database record
for a concept stored in the database 30 of FIG. 1.
[0022] FIG. 8 is a data structure diagram showing, by way of
example, a database table containing a lexicon of extracted
concepts stored in the database 30 of FIG. 1.
[0023] FIG. 9 is a graph showing, by way of example, a histogram of
the frequencies of concept occurrences generated by the routine of
FIG. 6.
[0024] FIG. 10 is a table showing, by way of example, concept
occurrence frequencies generated by the routine of FIG. 6.
[0025] FIG. 11 is a graph showing, by way of example, a corpus
graph of the frequency of concept occurrences generated by the
routine of FIG. 5.
[0026] FIG. 12 is a flow diagram showing the routine for creating
clusters for use in the routine of FIG. 5.
[0027] FIG. 13 is a table showing, by way of example, the concept
clusters created by the routine for FIG. 12.
[0028] FIG. 14 is a data representation diagram showing, by way of
example, a view of overlapping clusters generated by the system of
FIG. 1.
DETAILED DESCRIPTION
Glossary
[0029] Keyword: A literal search term which is either present or
absent from a document. Keywords are not used in the evaluation of
documents as described herein.
[0030] Term: A root stem of a single word appearing in the body of
at least one document.
[0031] Phrase: Two or more words co-occurring in the body of a
document.
[0032] A phrase can include stop words.
[0033] Concept: A collection of terms or phrases with common
semantic meanings.
[0034] Theme: Two or more concepts with a common semantic
meaning.
[0035] Cluster: All documents for a given concept or theme.
The foregoing terms are used throughout this document and, unless
indicated otherwise, are assigned the meanings presented above.
[0036] FIG. 1 is a block diagram showing a system 11 for
efficiently generating cluster groupings in a multi-dimensional
concept space, in accordance with the present invention. By way of
illustration, the system 11 operates in a distributed computing
environment 10, which includes a plurality of heterogeneous systems
and document sources. The system 11 implements a document analyzer
12, as further described below beginning with reference to FIG. 2,
for evaluating latent concepts in unstructured documents. The
system 11 is coupled to a storage device 13, which stores a
document warehouse 14 for maintaining a repository of documents and
a database 30 for maintaining document information.
[0037] The document analyzer 12 analyzes documents retrieved from a
plurality of local sources. The local sources include documents 17
maintained in a storage device 16 coupled to a local server 15 and
documents 20 maintained in a storage device 19 coupled to a local
client 18. The local server 15 and local client 18 are
interconnected to the system 11 over an intranetwork 21. In
addition, the document analyzer 12 can identify and retrieve
documents from remote sources over an internetwork 22, including
the Internet, through a gateway 23 interfaced to the intranetwork
21. The remote sources include documents 26 maintained in a storage
device 25 coupled to a remote server 24 and documents 29 maintained
in a storage device 28 coupled to a remote client 27.
[0038] The individual documents 17, 20, 26, 29 include all forms
and types of unstructured data, including electronic message
stores, such as electronic mail (email) folders, word processing
documents or Hypertext documents, and could also include graphical
or multimedia data. Notwithstanding, the documents could be in the
form of structured data, such as stored in a spreadsheet or
database. Content mined from these types of documents does not
require preprocessing, as described below.
[0039] In the described embodiment, the individual documents 17,
20, 26, 29 include electronic message folders, such as maintained
by the Outlook and Outlook Express products, licensed by Microsoft
Corporation, Redmond, Wash. The database is an SQL-based relational
database, such as the Oracle database management system, release 8,
licensed by Oracle Corporation, Redwood Shores, Calif.
[0040] The individual computer systems, including system 11, server
15, client 18, remote server 24 and remote client 27, are general
purpose, programmed digital computing devices consisting of a
central processing unit (CPU), random access memory (RAM),
non-volatile secondary storage, such as a hard drive or CD ROM
drive, network interfaces, and peripheral devices, including user
interfacing means, such as a keyboard and display. Program code,
including software programs, and data are loaded into the RAM for
execution and processing by the CPU and results are generated for
display, output, transmittal, or storage.
[0041] FIG. 2 is a block diagram showing the software modules 40
implementing the document analyzer 12 of FIG. 1. The document
analyzer 12 includes three modules: storage and retrieval manager
41, text analyzer 42, and display and visualization 44. The storage
and retrieval manager 41 identifies and retrieves documents 45 into
the document warehouse 14 (shown in FIG. 1). The documents 45 are
retrieved from various sources, including both local and remote
clients and server stores. The text analyzer 42 performs the bulk
of the text mining processing. The clustering module 43 generates clusters 49
of highly correlated documents, as further described below with
reference to FIG. 12. The display and visualization 44 complements
the operations performed by the text analyzer 42 by presenting
visual representations of the information extracted from the
documents 45. The display and visualization 44 can also generate a
graphical representation which preserves independent variable
relationships, such as described in commonly-assigned U.S. Pat. No.
6,888,548, issued May 3, 2005, the disclosure of which is
incorporated by reference.
[0042] During text analysis, the text analyzer 42 identifies terms
and phrases and extracts concepts in the form of noun phrases that
are stored in a lexicon 46 maintained in the database 30. After
normalizing the extracted concepts, the text analyzer 42 generates
a frequency table 47 of concept occurrences, as further described
below with reference to FIG. 6, and a matrix 48 of summations of
the products of pair-wise terms, as further described below with
reference to FIG. 10. The clustering module 43 generates logical clusters 49
of documents in a multi-dimensional concept space for modeling
semantic meaning. Similarly, the display and visualization 44
generates a histogram 50 of concept occurrences per document, as
further described below with reference to FIG. 6, and a corpus
graph 51 of concept occurrences over all documents, as further
described below with reference to FIG. 11.
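One plausible reading of the matrix 48 of "summations of the products of pair-wise terms" is an inner-product style co-occurrence matrix over per-document frequency vectors. The sketch below is an assumption offered for illustration, not the patent's own definition:

```python
def pairwise_matrix(freq_vectors):
    """For each pair of terms, sum the products of their per-document
    frequencies across all documents (a co-occurrence sketch)."""
    terms = sorted({t for vec in freq_vectors for t in vec})
    return {(a, b): sum(vec.get(a, 0) * vec.get(b, 0) for vec in freq_vectors)
            for a in terms for b in terms}

# Two hypothetical documents with per-term frequencies.
docs = [{"merger": 2, "contract": 1}, {"merger": 1}]
matrix = pairwise_matrix(docs)
print(matrix[("merger", "merger")])    # 5  (2*2 + 1*1)
print(matrix[("merger", "contract")])  # 2  (2*1 + 1*0)
```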
[0043] Each module is a computer program, procedure or module
written as source code in a conventional programming language, such
as the C++ programming language, and is presented for execution by
the CPU as object or byte code, as is known in the art. The various
implementations of the source code and object and byte codes can be
held on a computer-readable storage medium or embodied on a
transmission medium in a carrier wave. The document analyzer 12
operates in accordance with a sequence of process steps, as further
described below with reference to FIG. 5.
[0044] FIG. 3 is a process flow diagram showing the stages 60 of
text analysis performed by the document analyzer 12 of FIG. 1. The
individual documents 45 are preprocessed and noun phrases are
extracted as concepts (transition 61) into a lexicon 46. The noun
phrases are normalized and queried (transition 62) to generate a
frequency table 47. The frequency table 47 identifies individual
concepts and their respective frequency of occurrence within each
document 45. The frequencies of concept occurrences are visualized
(transition 63) into a frequency of concepts histogram 50. The
histogram 50 graphically displays the frequencies of occurrence of
each concept on a per-document basis. Next, the frequencies of
concept occurrences for all the documents 45 are assimilated
(transition 64) into a corpus graph 51 that displays the overall
counts of documents containing each of the extracted concepts.
Finally, the most highly correlated terms and phrases from the
extracted concepts are categorized (transition 65) into clusters
49.
[0045] FIG. 4 is a flow diagram showing a method 70 for efficiently
generating cluster groupings in a multi-dimensional concept space
44 (shown in FIG. 2), in accordance with the present invention. As
a preliminary step, the set of documents 45 to be analyzed is
identified (block 71) and retrieved into the document warehouse 14
(shown in FIG. 1) (block 72). The documents 45 are unstructured
data and lack a common format or shared type. The documents 45
include electronic messages stored in messaging folders, word
processing documents, hypertext documents, and the like.
[0046] Once identified and retrieved, the set of documents 45 is
analyzed (block 73), as further described below with reference to
FIG. 5. During text analysis, a matrix 48 (shown in FIG. 2) of
term-document association data is constructed to summarize the
semantic content inherent in the structure of the documents 45. The
semantic content is represented by groups of clusters of highly
correlated documents generated through k-means clustering. As well,
the frequency of individual terms or phrases extracted from the
documents 45 are displayed and the results, including the clusters
49, are optionally visualized (block 74), as further described
below with reference to FIG. 14. The routine then terminates.
[0047] FIG. 5 is a flow diagram showing the routine 80 for
performing text analysis for use in the method 70 of FIG. 4. The
purpose of this routine is to extract and index terms or phrases
for the set of documents 45 (shown in FIG. 2). Preliminarily, each
document in the document set is preprocessed (block 81) to
remove stop words. These include commonly occurring words, such as
indefinite articles ("a" and "an"), definite articles ("the"),
pronouns ("I", "he" and "she"), connectors ("and" and "or"), and
similar non-substantive words.
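The stop-word pass might look like the following sketch; the stop-word list is drawn from the examples in the text above, and the function name is an assumption for demonstration only:

```python
# Illustrative stop words taken from the categories listed in the text.
STOP_WORDS = {"a", "an", "the", "i", "he", "she", "and", "or"}

def preprocess(text):
    """Lowercase and tokenize a document, dropping stop words."""
    return [word for word in text.lower().split() if word not in STOP_WORDS]

print(preprocess("He signed the contract and an addendum"))
# ['signed', 'contract', 'addendum']
```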
[0048] Following preprocessing, a histogram 50 of the frequency of
terms (shown in FIG. 2) is logically created for each document 45
(block 82), as further described below with reference to FIG. 6.
Each histogram 50, as further described below with reference to
FIG. 9, maps the relative frequency of occurrence of each extracted
term on a per-document basis.
[0049] Next, a document reference frequency (corpus) graph 51, as
further described below with reference to FIG. 11, is created for
all documents 45 (block 83). The corpus graph 51 graphically maps
the semantically-related concepts for the entire document set
based on terms and phrases. A subset of the corpus is selected by
removing those terms and phrases falling outside either edge of
predefined thresholds (block 84). For shorter documents, such as
email, having less semantically-rich content, the thresholds are
set from about 1% to about 15%, inclusive. Larger documents may
require tighter threshold values.
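A sketch of this threshold step, assuming the 1% to 15% thresholds are applied as fractions of the document count (the function and variable names are hypothetical):

```python
def select_corpus_subset(doc_counts, n_docs, lower=0.01, upper=0.15):
    """Keep concepts whose document counts fall inside the predefined
    thresholds (about 1% to about 15% for shorter documents)."""
    return {concept: count for concept, count in doc_counts.items()
            if lower * n_docs <= count <= upper * n_docs}

# Hypothetical corpus of 1,000 documents.
doc_counts = {"contract": 90, "merger": 10, "the_company": 950}
print(select_corpus_subset(doc_counts, 1000))
# {'contract': 90, 'merger': 10}  ("the_company" occurs too frequently)
```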
[0050] The selected set of terms and phrases falling within the
thresholds are used to generate themes (and concepts) (block 85)
based on correlations between normalized terms and phrases in the
documents set. In the described embodiment, themes are primarily
used, rather than individual concepts, as a single co-occurrence of
terms or phrases carries less semantic meaning than multiple
co-occurrences. As used herein, any reference to a "theme" or
"concept" will be understood to include the other term, except as
specifically indicated otherwise.
[0051] Next, clusters of concepts and themes are created (block 86)
from groups of highly-correlated terms and phrases, as further
described below with reference to FIG. 12. The routine then
returns.
[0052] FIG. 6 is a flow diagram showing the routine 90 for creating
a histogram 50 (shown in FIG. 2) for use in the routine of FIG. 5.
The purpose of this routine is to extract noun phrases representing
individual concepts and to create a normalized representation of
the occurrences of the concepts on a per-document basis. The
histogram represents the logical union of the terms and phrases
extracted from each document. In the described embodiment, the
histogram 50 need not be expressly visualized, but is generated
internally as part of the text analysis process.
[0053] Initially, noun phrases are extracted (block 91) from each
document 45. In the described embodiment, concepts are defined on
the basis of the extracted noun phrases, although individual nouns
or tri-grams (word triples) could be used in lieu of noun phrases.
In the described embodiment, the noun phrases are extracted using
the LinguistX product licensed by Inxight Software, Inc., Santa
Clara, Calif.
[0054] Once extracted, the individual terms or phrases are loaded
into records stored in the database 30 (shown in FIG. 1) (block
92). The terms stored in the database 30 are normalized (block 93)
such that each concept appears as a record only once. In the
described embodiment, the records are normalized into third normal
form, although other normalization schemas could be used.
[0055] FIG. 7 is a data structure diagram showing a database record
100 for a concept stored in the database 30 of FIG. 1. Each
database record 100 includes fields for storing an identifier 101,
string 102 and frequency 103. The identifier 101 is a monotonically
increasing integer value that uniquely identifies each term or
phrase stored as the string 102 in each record 100. The frequency
of occurrence of each term or phrase is tallied in the frequency
103.
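The record layout of FIG. 7 can be modeled along these lines; the class and helper names below are invented for illustration:

```python
from dataclasses import dataclass
from itertools import count

_next_id = count(1)  # monotonically increasing identifier, per FIG. 7

@dataclass
class ConceptRecord:
    identifier: int     # uniquely identifies the term or phrase
    string: str         # the extracted term or phrase itself
    frequency: int = 0  # tally of occurrences

def make_record(phrase):
    """Create a record with the next unique identifier."""
    return ConceptRecord(identifier=next(_next_id), string=phrase)

record = make_record("document cluster")
record.frequency += 1  # tally one occurrence
```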
[0056] FIG. 8 is a data structure diagram showing, by way of
example, a database table 110 containing a lexicon 111 of extracted
concepts stored in the database 30 of FIG. 1. The lexicon 111 maps
out the individual occurrences of identified terms 113 extracted
for any given document 112. By way of example, the document 112
includes three terms numbered 1, 3 and 5. Concept 1 occurs once in
document 112, concept 3 occurs twice, and concept 5 occurs once.
The lexicon tallies and represents the frequency of occurrences of
the concepts 1, 3 and 5 across all documents 45.
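The worked example above (concept 1 once, concept 3 twice, concept 5 once) can be reproduced with a small sketch; the dict-of-dicts layout below stands in, hypothetically, for the database table of FIG. 8:

```python
# Lexicon: per-document tallies keyed by concept identifier.
lexicon = {
    "doc_112": {1: 1, 3: 2, 5: 1},  # concept 1 once, 3 twice, 5 once
}

def corpus_tally(lexicon):
    """Sum each concept's occurrences across all documents."""
    totals = {}
    for doc_counts in lexicon.values():
        for concept, n in doc_counts.items():
            totals[concept] = totals.get(concept, 0) + n
    return totals

print(corpus_tally(lexicon))  # {1: 1, 3: 2, 5: 1}
```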
[0057] Referring back to FIG. 6, a frequency table is created from
the lexicon 111 for each given document 45 (block 94). The
frequency table is sorted in order of decreasing frequencies of
occurrence for each concept 113 found in a given document 45. In
the described embodiment, all terms and phrases occurring just once
in a given document are removed as not relevant to semantic
content. The frequency table is then used to generate a histogram
50 (shown in FIG. 2) (block 95) which visualizes the frequencies of
occurrence of extracted concepts in each document. The routine then
returns.
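The frequency-table step, sorting by decreasing frequency and dropping single-occurrence terms, might be sketched as follows (names are illustrative):

```python
from collections import Counter

def frequency_table(doc_concepts):
    """Tally concept occurrences for one document, remove concepts that
    occur only once, and sort by decreasing frequency."""
    counts = Counter(doc_concepts)
    return sorted(((c, n) for c, n in counts.items() if n > 1),
                  key=lambda item: item[1], reverse=True)

doc = ["merger", "contract", "merger", "deadline", "contract", "merger"]
print(frequency_table(doc))  # [('merger', 3), ('contract', 2)]
```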
[0058] FIG. 9 is a graph showing, by way of example, a histogram 50
of the frequencies of concept occurrences generated by the routine
of FIG. 6. The x-axis defines the individual concepts 121 for each
document and the y-axis defines the frequencies of occurrence of
each concept 122. The concepts are mapped in order of decreasing
frequency 123 to generate a curve 124 representing the semantic
content of the document 45. Accordingly, terms or phrases appearing
on the increasing end of the curve 124 have a high frequency of
occurrence while concepts appearing on the descending end of the
curve 124 have a low frequency of occurrence.
[0059] FIG. 10 is a table 130 showing, by way of example, concept
occurrence frequencies generated by the routine of FIG. 6. Each
concept 131 is mapped against the total frequency occurrence 132
for the entire set of documents 45. Thus, for each of the concepts
133, a cumulative frequency 134 is tallied. The corpus table 130 is
used to generate the document concept frequency reference (corpus)
graph 51.
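The corpus table 130 and the corpus graph 51 draw on two related tallies: the cumulative frequency of each concept (FIG. 10) and the number of documents referencing each concept (plotted in FIG. 11). A minimal sketch, assuming each document is represented as a concept-to-count mapping:

```python
from collections import Counter

def build_corpus_table(doc_tables):
    """Tally, across the entire set of documents, the cumulative
    frequency of each concept and the number of documents that
    reference it."""
    cumulative = Counter()   # total frequency occurrence per concept
    doc_refs = Counter()     # documents referencing each concept
    for table in doc_tables:
        cumulative.update(table)
        doc_refs.update(table.keys())
    return cumulative, doc_refs
```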
[0060] FIG. 11 is a graph 140 showing, by way of example, a corpus
graph of the frequency of concept occurrences generated by the
routine of FIG. 5. The graph 140 visualizes the extracted concepts
as tallied in the corpus table 130 (shown in FIG. 10). The x-axis
defines the individual concepts 141 for all documents and the
y-axis defines the number of documents 45 referencing each concept
142. The individual concepts are mapped in order of descending
frequency of occurrence 143 to generate a curve 144 representing
the latent semantics of the set of documents 45.
[0061] A median value 145 is selected and edge conditions 146a-b
are established to discriminate between concepts which occur too
frequently versus concepts which occur too infrequently. The
documents containing concepts that fall within the edge conditions
146a-b form a subset of documents containing latent concepts. In the described
embodiment, the median value 145 is document-type dependent. For
efficiency, the upper edge condition 146b is set to 70% and the 64
concepts immediately preceding the upper edge condition 146b are
selected, although other forms of threshold discrimination could
also be used.
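One plausible reading of the edge-condition selection is sketched below. The 70% upper edge and the 64-concept window follow the described embodiment, but interpreting the edge condition as a fractional position along the descending-frequency curve is an assumption, and other threshold schemes could be substituted:

```python
def select_latent_concepts(corpus_sorted, upper_edge=0.70, window=64):
    """Select the concepts immediately preceding the upper edge
    condition, given (concept, frequency) pairs sorted in descending
    order of frequency of occurrence."""
    # Position of the upper edge condition along the sorted curve.
    cut = int(len(corpus_sorted) * upper_edge)
    # The window of concepts immediately preceding the upper edge.
    start = max(0, cut - window)
    return [concept for concept, _ in corpus_sorted[start:cut]]
```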
[0062] FIG. 12 is a flow diagram 150 showing the routine for
creating clusters for use in the routine of FIG. 5. The purpose of
this routine is to build a concept space over a document collection
consisting of clusters 49 (shown in FIG. 2) of individual documents
having semantically similar content. Initially, a single cluster is
created and additional clusters are added using a k-means clustering
technique, as required by the document set. Those documents falling
outside a pre-determined variance are grouped into new clusters,
such that every document in the document set appears in at least
one cluster and the concepts and themes contained therein are
distributed over a meaningful range of clusters. The clusters are
then visualized as a data representation, as further described
below with reference to FIG. 14.
[0063] Each cluster consists of a set of documents that share
related terms and phrases as mapped in a multi-dimensional concept
space. Those documents having identical terms and phrases mapped to
a single cluster located along a vector at a distance (magnitude) d
measured at an angle .theta. from a common origin relative to the
multi-dimensional concept space. Accordingly, a Euclidean distance
between the individual concepts can be determined and clusters
created.
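Treating each document as a concept-frequency vector, the distance (magnitude) d, the angle .theta. from the common origin, and the Euclidean distance between documents can be sketched as follows. This is a minimal sketch; the choice of a reference axis against which angles are measured is an assumption:

```python
import math

def magnitude(vec):
    """Distance d of a concept-frequency vector from the common origin."""
    return math.sqrt(sum(v * v for v in vec.values()))

def angle_theta(vec, axis):
    """Angle theta of a vector measured against a common reference axis."""
    dot = sum(v * axis.get(c, 0.0) for c, v in vec.items())
    return math.acos(dot / (magnitude(vec) * magnitude(axis)))

def euclidean_distance(a, b):
    """Euclidean distance between two documents in the concept space."""
    concepts = set(a) | set(b)
    return math.sqrt(sum((a.get(c, 0.0) - b.get(c, 0.0)) ** 2
                         for c in concepts))
```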
[0064] Initially, a variance specifying an upper bound on Euclidean
distances in the multi-dimensional concept space is determined
(block 151). In the described embodiment, a variance of five
percent is specified, although other variance values, either
greater or lesser than five percent, could be used as appropriate
to the data profile. As well, an internal counter num_clusters is
set to the initial value of 1 (block 152).
[0065] The documents and clusters are iteratively processed in a
pair of nested processing loops (blocks 153-164 and 156-161).
During each iteration of the outer processing loop (blocks
153-164), each document i is processed (block 153) for every
document in the document set. Each document i is first selected
(block 154) and the angle .theta. relative to a common origin is
computed (block 155).
[0066] During each iterative loop of the inner processing loop
(block 156-161), the selected document i is compared to the
existing set of clusters. Thus, a cluster j is selected (block 157)
and the angle .sigma. relative to the common origin is computed
(block 158). Note the angle .sigma. must be recomputed regularly
for each cluster j as documents are added or removed. The
difference between the angle .theta. for the document i and the
angle .sigma. for the cluster j is compared to the predetermined
variance (block 159). If the difference is less than the
predetermined variance (block 159), the document i is put into the
cluster j (block 160) and the iterative processing loop (block
156-161) is terminated. If the difference is greater than or equal
to the variance (block 159), the next cluster j is processed (block
161) and processing continues for each of the current clusters
(blocks 156-161).
[0067] If the difference between the angle .theta. for the document
i and the angle .sigma. for each of the clusters exceeds the
variance, a new cluster is created (block 162) and the counter
num_clusters is incremented (block 163). Processing continues with
the next document i (block 164) until all documents have been
processed (blocks 153-164). The categorization of clusters is
repeated (block 165) if necessary. In the described embodiment, the
cluster categorization (blocks 153-164) is repeated at least once
until the set of clusters settles. Finally, the clusters can be
finalized (block 165) as an optional step. Finalization includes
merging two or more clusters into a single cluster, splitting a
single cluster into two or more clusters, removing minimal or
outlier clusters, and similar operations, as would be recognized by
one skilled in the art. The routine then returns.
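The iterative routine of blocks 151-165 can be sketched as below. Two interpretive assumptions are made: a cluster's angle .sigma. is taken as the angle of its centroid (recomputed as membership changes), and the five-percent variance is read as a bound on the angular difference in radians. Helper and parameter names are hypothetical:

```python
import math

def _angle(vec, axis):
    # theta (or sigma) relative to the common origin, against a common axis
    mag = lambda u: math.sqrt(sum(x * x for x in u.values()))
    dot = sum(v * axis.get(c, 0.0) for c, v in vec.items())
    return math.acos(max(-1.0, min(1.0, dot / (mag(vec) * mag(axis)))))

def _centroid(docs):
    # mean concept-frequency vector of the cluster's documents
    concepts = {c for d in docs for c in d}
    return {c: sum(d.get(c, 0.0) for d in docs) / len(docs)
            for c in concepts}

def cluster_documents(documents, axis, variance=0.05):
    """Place each document in the first cluster whose angle sigma
    differs from the document's angle theta by less than the variance;
    otherwise create a new cluster (blocks 153-164)."""
    clusters = []
    for doc in documents:                              # outer loop 153-164
        theta = _angle(doc, axis)                      # block 155
        for members in clusters:                       # inner loop 156-161
            sigma = _angle(_centroid(members), axis)   # block 158, recomputed
            if abs(theta - sigma) < variance:          # block 159
                members.append(doc)                    # block 160
                break
        else:
            clusters.append([doc])                     # block 162
    return clusters
```

In practice this pass would be repeated until the set of clusters settles, with optional merge/split finalization afterward, as described above.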
[0068] FIG. 13 is a table 180 showing, by way of example, the
concept clusters created by the routine 150 of FIG. 12. Each of the
concepts 181 should appear in at least one of the clusters 182,
thereby ensuring that each document appears in some cluster. The
Euclidean distances 183a-d between the documents for a given
concept are determined. Those Euclidean distances 183a-d falling
within a predetermined variance are assigned to each individual
cluster 184-186. The table 180 can be used to visualize the
clusters in a multi-dimensional concept space.
[0069] FIG. 14 is a data representation diagram showing, by way
of example, a view 191 of overlapping clusters 193-196 generated by
the system of FIG. 1. Each cluster 193-196 has a center c 197-200
and radius r 201-204, respectively, and is oriented around a common
origin 192. The center c of each cluster 193-196 is located at a
fixed distance d 205-208 from the common origin 192. Cluster 194
overlays cluster 193 and clusters 193, 195 and 196 overlap.
[0070] Each cluster 193-196 represents multi-dimensional data
modeled in a three-dimensional display space. The data could be
visualized data for a virtual semantic concept space, including
semantic content extracted from a collection of documents
represented by weighted clusters of concepts, such as described in
commonly-assigned U.S. Pat. No. 6,978,274, issued Dec. 20, 2005,
the disclosure of which is incorporated by reference.
[0071] For each cluster 193, the radii r 201-204 and distances d
205-208 are independent variables relative to the other clusters
194-196 and the radius r 201 is an independent variable relative to
the common origin 192. In this example, each cluster 193-196
represents a grouping of points corresponding to documents sharing
a common set of related terms and phrases. The radii 201-204 of
each cluster 193-196 reflect the relative number of documents
contained in each cluster. Those clusters 193-196 located along the
same vector are similar in theme, as are those clusters located on
vectors having a small cosine rotation from each other. Thus, the
angle .theta. relative to a common axis and the distance d from the
common origin 192 are independent variables, with the angle .theta.
correlating to relative similarity of theme.
Although shown with respect to a circular shape, each cluster
193-196 could be non-circular. At a minimum, however, each cluster
193-196 must have a center of mass and be oriented around the
common origin 192 and must define a convex volume. Accordingly,
other shapes defining each cluster 193-196 are feasible.
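The placement described above, with each center c at a fixed distance d along angle .theta. from the common origin 192 and a radius r reflecting cluster size, reduces to a polar-to-Cartesian conversion. A minimal sketch; scaling the radius by the square root of the document count (so area tracks cluster size) is an assumption:

```python
import math

def place_cluster(distance, theta, num_docs, scale=1.0):
    """Compute the display-space center and radius of a cluster placed
    at distance d along angle theta from the common origin; the radius
    reflects the relative number of documents in the cluster."""
    center = (distance * math.cos(theta), distance * math.sin(theta))
    radius = scale * math.sqrt(num_docs)   # assumption: area ~ doc count
    return center, radius
```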
[0072] While the invention has been particularly shown and
described as referenced to the embodiments thereof, those skilled
in the art will understand that the foregoing and other changes in
form and detail may be made therein without departing from the
spirit and scope of the invention.
* * * * *