U.S. patent application number 10/736,690, for a system and methods for an architectural framework for design of an adaptive, personalized, interactive content delivery system, was filed on December 17, 2003 and published on July 1, 2004. The application is assigned to SBC Technology Resources, Inc. The invention is credited to Arellano, Javier B.; Divine, Abha S.; Dobes, Zuzana K.; and Liu, Guangtian.
United States Patent Application 20040128624, Kind Code A1
Arellano, Javier B.; et al.
Application Number: 10/736,690
Family ID: 31190556
Published: July 1, 2004
System and methods for an architectural framework for design of an
adaptive, personalized, interactive content delivery system
Abstract
At least one user profile is created for at least one user. The
profile may represent interests and trends of the user. A
multimedia story is developed based on the user profile and a
customized presentation of the multimedia story is generated. The
customized presentation is displayed to the user in accordance with
the delivery context.
Inventors: Arellano, Javier B. (Austin, TX); Divine, Abha S. (Austin, TX); Dobes, Zuzana K. (Austin, TX); Liu, Guangtian (Austin, TX)
Correspondence Address: GREENBLUM & BERNSTEIN, P.L.C., 1950 ROLAND CLARKE PLACE, RESTON, VA 20191, US
Assignee: SBC Technology Resources, Inc., Austin, TX
Family ID: 31190556
Appl. No.: 10/736,690
Filed: December 17, 2003
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
10/736,690 | Dec 17, 2003 |
09/393,281 | Sep 10, 1999 | 6,694,482
60/099,947 | Sep 11, 1998 |
Current U.S. Class: 715/255; 707/E17.009; 715/273
Current CPC Class: G09B 5/06 (20130101); G06F 16/40 (20190101)
Class at Publication: 715/530
International Class: G06F 015/00
Claims
What is claimed is:
1. A method for dynamically creating and delivering interactive
personalized multimedia content in an electronic environment,
comprising: providing a narrative framework; sequencing and editing
the narrative framework, based upon a user profile, to create a
dynamically generated narrative; modifying the dynamically
generated narrative based upon a delivery context; and rendering
the modified narrative.
2. The method of claim 1, further comprising updating the user
profile based on a user interaction history.
3. The method of claim 1, in which the user profile is created by
gathering data from the user, analyzing a history of the user,
monitoring data related to the user, and detecting patterns and
trends of the user.
4. The method of claim 1, in which the delivery context comprises a
display area.
5. The method of claim 1, in which the delivery context comprises a
network connection.
6. The method of claim 1, in which the narrative framework further
comprises content elements, each content element comprising a
plurality of types of representations having different media
characteristics, facilitating modification based upon delivery
context.
7. A method for generating a personalized broadcast program guide
that suggests programs to a user, the method comprising: creating a
standard program schedule based upon an initial time period;
obtaining a profile of the user; selecting suggested programs based
upon the user profile and the standard program schedule; resolving
constraints specified by display rules; and displaying the
suggested programs in accordance with the resolved constraints.
8. The method of claim 7, further comprising periodically refining
the user profile.
9. The method of claim 7, in which the user profile represents
interests of the user.
10. A method for dynamically assembling content, comprising:
adapting the content to a user; adapting the content based upon
available content; and adapting the content to a context at a
delivery time.
11. The method of claim 10, in which the context comprises a
display area.
12. The method of claim 10, in which the context comprises a
network connection.
13. A computer readable medium storing a program for dynamically
creating and delivering interactive personalized multimedia content
in an electronic environment, comprising: a retrieving source code
segment that retrieves a narrative framework; an editing source
code segment that sequences and edits the narrative framework,
based upon a user profile, to create a dynamically generated
narrative; a delivery context source code segment that modifies the
dynamically generated narrative based upon a delivery context; and
a rendering source code segment that renders the modified
narrative.
14. The medium of claim 13, further comprising a profile updating
source code segment that updates the user profile based on a user
interaction history.
15. The medium of claim 13, further comprising a profile creation
source code segment that creates the user profile by gathering data
from the user, analyzing a history of the user, monitoring data
related to the user, and detecting patterns and trends of the
user.
16. The medium of claim 13, in which the delivery context comprises
a display area.
17. The medium of claim 13, in which the delivery context comprises
a network connection.
18. The medium of claim 13, in which the narrative framework
further comprises content elements, each content element comprising
a plurality of types of representations having different media
characteristics, facilitating modification based upon delivery
context.
19. A computer readable medium storing a program for generating a
personalized broadcast program guide that suggests programs to a
user, the program comprising: a creation source code segment that
creates a standard program schedule based upon an initial time
period; a profile source code segment that obtains a profile of the
user; a selecting source code segment that selects suggested
programs based upon the user profile and the standard program
schedule; a constraint source code segment that resolves at least
one constraint specified by at least one display rule; and a
display source code segment that displays the suggested programs in
accordance with the resolved constraint.
20. The medium of claim 19, further comprising a refining source
code segment that periodically refines the user profile.
21. The medium of claim 19, in which the user profile represents
interests of the user.
22. A computer readable medium storing a program for dynamically
assembling content, comprising: a user source code segment that
adapts the content to a user; a content source code segment that
adapts the content based upon available content; and a delivery
context source code segment that adapts the content to a context at
a delivery time.
23. The medium of claim 22, in which the context comprises a
display area.
24. The medium of claim 22, in which the context comprises a
network connection.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation of U.S. patent
application Ser. No. 09/393,281, filed on Sep. 10, 1999, which
claims the benefit of U.S. Provisional Application No. 60/099,947,
filed on Sep. 11, 1998, the contents of both of which are expressly
incorporated by reference herein.
BACKGROUND OF THE INVENTION
[0002] 1. Field of the Invention
[0003] This invention relates to architectural frameworks for
development of multimedia applications, and more specifically to
architectural frameworks for developing adaptive, personalized,
interactive multimedia applications and services.
[0004] 2. Background and Material Information
[0005] In general, designing and implementing interactive systems
is a complex and lengthy task. If one adds multimedia to the
development equation, the level of complexity, the content
variability, and the required management support immediately soar
and can overwhelm the development process. On the other hand, there
presently exists a very dynamic and rich environment that
potentially offers a business opportunity, allowing one to build a
family of applications that can be strongly differentiated by
leveraging the same rich and complex content. Thus, a double-edged
sword exists.
[0006] If one examines the requirements, present and future, of
information, more specifically multimedia information, one
discovers that in general these requirements are a response to the
"dynamics of information". These dynamics can be characterized by:
constantly changing information; a broad user population; and a
heterogeneous landscape of delivery devices. If one grafts onto this
picture the dynamics of collaboration, or computer-supported work in
synchronous or asynchronous mode, the technical problems are further
compounded, but the opportunity for differentiated and value-added
services increases, i.e., the double-edged sword once again.
[0007] The best way to understand a system is to have an
abstraction that describes a simpler picture of the structure and
the machinery. A metaphoric vehicle is useful in that it allows
framing of a problem and likewise offers a solution that supports
and promotes flexibility, expressiveness, and scalability in
information design and display. One can say that a multimedia
presentation is like "telling a story". The presentation author is
attempting to convey a communicative intent and more than likely it
was constructed with a particular audience in mind, as well as a
specific context and medium.
[0008] The computational narrative model, as disclosed in Brooks,
K. M., "Do Agent Stories Use Rocking Chairs: The Theory and
Implementation of One Model for Computational Narrative",
Proceedings of the Fourth ACM International Multimedia Conference
on Intelligent User Interfaces, ACM Press, 1996, and Murtaugh, M.,
"The Automatist Storytelling System: Putting the Editor's Knowledge
in Software", MIT MS Thesis, 1996, offers a metaphor for creating
tools that are capable of going beyond traditional storytelling by
enhancing the editorial process through the leveraging of the
computer's ability to support rapid decision making. According to
Brooks, the narrative represents the universe of story elements for
a given story, i.e., the collection of possibilities, while the
narration is a specific navigation through that universe.
[0009] As shown in FIG. 1, the process of computational
storytelling involves the author supplying the elements of the
story and the structure to organize the story elements. The agent
takes the elements of the story and the structure and generates a
story, more precisely, a narrative, and presents the "story" to an
audience. The audience reacts and generates feedback to the agent.
The agent acting as proxy for the author can react to the feedback
by modifying the presentation.
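The storytelling loop of FIG. 1 can be sketched as follows. This is a minimal illustrative sketch, not the disclosed implementation; every function and data name here is a hypothetical assumption.

```python
# Minimal sketch of the computational storytelling loop of FIG. 1:
# the author supplies story elements and a structure, an agent
# sequences them into a narrative, and audience feedback reshapes
# the next telling. All names are illustrative assumptions.

def generate_narrative(elements, structure, emphasis=None):
    """Order story element ids from the structure, promoting any
    element the audience has reacted to positively."""
    emphasis = emphasis or {}
    return sorted(structure, key=lambda eid: -emphasis.get(eid, 0))

def tell_story(elements, structure, feedback_rounds):
    emphasis = {}
    narrative = generate_narrative(elements, structure, emphasis)
    for reactions in feedback_rounds:  # audience feedback per round
        for element_id, score in reactions.items():
            emphasis[element_id] = emphasis.get(element_id, 0) + score
        # the agent, acting as proxy for the author, re-sequences
        narrative = generate_narrative(elements, structure, emphasis)
    return [elements[eid] for eid in narrative]

elements = {"a": "intro", "b": "conflict", "c": "resolution"}
story = tell_story(elements, ["a", "b", "c"], [{"c": 2}])
# positive feedback on "c" promotes it to the front of the telling
```

The point of the sketch is only the feedback arrow of FIG. 1: the same elements and structure yield a different narration once audience reactions accumulate.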
[0010] Some current conceptual views regarding the techniques or
technical strategies that are related to developing a framework for
creating and delivering interactive multimedia applications
include: dynamic presentation, behavior-based artificial
intelligence, memory-based learning, and user modeling.
[0011] Regarding dynamic presentation, Maybury, M. "Intelligent
Multimedia Interfaces", AAAI/MIT Press, Cambridge, Mass., 1993,
discloses that automatic multimedia presentation involves the
stages of content selection (i.e., what to say), media allocation
(i.e., what media to present it in), and media realization (i.e.,
how to say it). The focus here is on the media allocation and
realization phases, more specifically, on how to create
presentations without knowing all the "facts" at design time. The
basic objective is to enable the creation of user interfaces that
are sufficiently flexible and adaptive to "re-invent" themselves at
run time. To support this flexibility and adaptability, an
interface needs to be developed not to a final fixed form, but to
some protean form that can be reshaped at run time, time after
time, to meet the requirements of any situation that invalidates
its current form.
[0012] Szekely P., "Retrospective and Challenges for Model-Based
Interface Development", USC Information Sciences Institute, Marina
del Rey, Calif., 1996, proposes one architecture. Szekely discloses
that a model-based user interface calls for a model of the
interface that is organized as three levels of abstraction: task
and domain model for the application, an abstract user interface
specification, and a concrete user interface specification. The
task model represents the task that the user will undertake to
perform with the application. The domain model represents the data
and the operations that are part of an application.
[0013] The second level, according to Szekely, is the abstract user
interface specification. At this level, an interface is defined in
terms of abstract interaction units, information elements, and
presentation units. The abstract interaction units are low-level
interactions such as showing a presentation unit. Information
elements represent data such as attributes extracted from the
domain model. Presentation units are abstractions of windows and
specify collections of abstract presentation units and information
elements that are to be treated as a unit. Basically, the abstract
user interface specification abstractly specifies the way
information will be presented in the interface and the form of
interaction with that information.
[0014] The third level, according to Szekely, is the concrete user
interface specification that specifies rendering styles for the
presentation units, i.e., widgets. Different model-based user
interface (UI) frameworks differ in what models they provide.
Szekely discloses that some frameworks have one model but not the
other two, while in other cases, only one model is defined. FIG. 2
is a flowchart showing a generic model-based presentation system as
disclosed in Szekely.
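Szekely's three levels can be illustrated with a small sketch: information elements drawn from a domain model, an abstract presentation unit grouping them, and a concrete step that binds rendering styles (widgets) at run time. The class names and widget map are assumptions for illustration, not part of Szekely's framework or of the present invention.

```python
# Hypothetical sketch of a model-based UI's three abstraction
# levels: information elements (domain data), an abstract
# presentation unit, and a concrete specification chosen at run time.
from dataclasses import dataclass

@dataclass
class InformationElement:      # data extracted from the domain model
    name: str
    value: object

@dataclass
class PresentationUnit:        # abstraction of a window
    title: str
    elements: list

def concretize(unit, widget_map):
    """Concrete UI specification: pick a widget (rendering style)
    for each information element; the same abstract unit can yield
    different concrete renderings under different widget maps."""
    return {el.name: (widget_map.get(type(el.value), "label"), el.value)
            for el in unit.elements}

unit = PresentationUnit("Account", [
    InformationElement("balance", 42.0),
    InformationElement("active", True),
])
concrete = concretize(unit, {float: "numeric_field", bool: "checkbox"})
```

Swapping in a different `widget_map` re-renders the same abstract unit, which is the separation of levels the text describes.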
[0015] An alternative reasoning framework has emerged in Artificial
Intelligence circles called Behavior-Based AI (BBAI) as disclosed
in Maes, P. "Behavior-Based Artificial Intelligence", Proceedings
of Second Animat Conference on Adaptive Behavior, 1992. This new
approach represents more of a different way of thinking about a
problem domain than an alternative reasoning technique. The
knowledge-based approach involves capturing the rules to solve a
domain. In contrast, the BBAI approach relies on a set of
lower-level competencies, each of which is expert at solving one
part of the larger problem domain, as disclosed in Brooks.
[0016] Additionally, the BBAI approach tends to emphasize the
system behavior as opposed to the system knowledge. Furthermore,
BBAI stresses that the system should be situated in its environment
and have direct (or as close as possible) access to the problem
domain. This framework enables a system to bring together different
classes of reasoning techniques, heuristic, statistical, etc., and
incorporate each application of a technique into a lower-level
competency module or "expert". In effect, these modules come
together to form a multi-agent system.
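The competency-module idea above can be sketched as several small experts whose proposals are merged; the two modules and the additive combination rule below are illustrative assumptions only.

```python
# Illustrative sketch of the behavior-based approach: lower-level
# competency modules, each expert in one part of the problem domain
# (one heuristic, one statistical), are consulted and their
# proposals combined into a multi-agent decision.

def heuristic_module(situation):
    # a simple rule-of-thumb expert
    return {"recommend": 1.0} if situation.get("prime_time") else {}

def statistical_module(situation):
    # an expert that weighs genres by observed watch frequency
    history = situation.get("watch_counts", {})
    total = sum(history.values()) or 1
    return {genre: n / total for genre, n in history.items()}

def combine(situation, modules):
    """Merge proposals from all competency modules additively."""
    merged = {}
    for module in modules:
        for action, weight in module(situation).items():
            merged[action] = merged.get(action, 0.0) + weight
    return merged

proposals = combine(
    {"prime_time": True, "watch_counts": {"news": 3, "drama": 1}},
    [heuristic_module, statistical_module])
```

Each module is situated directly in the problem data it understands, and the system's behavior emerges from their combination rather than from one global rule base.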
[0017] Another learning technique, as disclosed in Stanfield, C. et
al., "Toward Memory-Based Reasoning", Communications of the ACM,
20(12), ACM Press, 1986, is memory based learning. Basically,
memory-based learning entails comparing a new situation against
each of the situations which have occurred before. Given a new
situation, a memory-based learning agent looks at the actions taken
in N of the "closest" situations or "nearest neighbors" to predict
the action for a new situation. FIG. 3 shows a diagram of the
memory-based reasoning approach.
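The nearest-neighbor prediction just described can be sketched in a few lines; the feature vectors and squared-distance measure are illustrative assumptions.

```python
# Sketch of memory-based learning: a new situation is compared
# against each stored situation, and the actions taken in the N
# "closest" situations (nearest neighbors) vote on the new action.
from collections import Counter

def distance(a, b):
    # squared Euclidean distance between feature vectors
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict_action(memory, situation, n=3):
    """memory: list of (feature_vector, action) pairs."""
    nearest = sorted(memory, key=lambda m: distance(m[0], situation))[:n]
    votes = Counter(action for _, action in nearest)
    return votes.most_common(1)[0][0]

memory = [((0, 0), "skip"), ((0, 1), "skip"),
          ((5, 5), "watch"), ((6, 5), "watch"), ((5, 6), "watch")]
action = predict_action(memory, (5, 5), n=3)
```

No rules are extracted from the memory; prediction is performed directly against the stored cases, which is the defining trait of the approach.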
[0018] User modeling is an inexact science but its predictions need
not be perfect to be useful. User models can range from simply
storing a bit indicating whether the user is a novice or an expert
in terms of an application, to a rich, complex snapshot of the
user's interests and preferences. Once a universe of user models is
collected and maintained, the models may serve as data for further
analysis to find patterns and trends in this universe. These are
some of the many critical issues relevant to user modeling.
[0019] User models may be either pragmatic or cognitive, as
disclosed in Orwant, J., "Doppelganger Goes To School: Machine
Learning for User Modeling", MIT MS Thesis, 1993. The cognitive
type user models are not connected to any application or
applications in particular. This type of user model is attempting
to capture a user's beliefs, goals and plans in a general sense. A
pragmatic user model is not driven by a cognitive model but by the
practical aspects of the environment, e.g., applications. The
pragmatic user model can be characterized by collecting raw
observational data and making sense of the data after the fact. In
another sense, the cognitive model is a top down approach and the
pragmatic model is a bottom up approach.
[0020] Conceptually, individuals can take on particular roles,
e.g., business, leisure, parental, professional. In a user modeling
sense, these roles are defined as personae. Personae could be
utilized to partition the user model space into more manageable
chunks.
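The persona partitioning above can be sketched as keeping role-specific sub-profiles inside one user model; the role names and the dictionary format are illustrative assumptions.

```python
# Sketch of partitioning a user model by persona: each observation
# is recorded under the role the user was playing, so an application
# can consult only the relevant chunk of the model.

def observe(model, persona, interest, weight):
    """Record an interest under a specific persona sub-profile."""
    model.setdefault(persona, {})
    model[persona][interest] = model[persona].get(interest, 0.0) + weight

user_model = {}
observe(user_model, "business", "stock_news", 1.0)
observe(user_model, "leisure", "travel", 0.5)
observe(user_model, "business", "stock_news", 0.5)
# the "business" persona accumulates weight independently of "leisure"
```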
[0021] A pragmatic user model can make use of filtering techniques.
Content-based filtering involves selecting items for the user based
on correlations between content of the items and the user's
preferences. For example, a personalized TV program guide uses
information about a television program, such as the program's type
and its level of violence to predict whether or not to recommend
and include the show in a personalized line-up. Generally, users
rely on exploration to discover new items of interest, i.e.,
serendipitous items. By definition, content-based filtering has no
inherent capability to generate this sort of item. In practice,
one must add special-purpose techniques to content-based filtering
to introduce serendipity. For example, a user might be unaware of
her interest in true crime shows until she actually comes across
"America's Most Wanted". Assuming no indications of this trend had
previously surfaced, content-based filtering would never have
detected this particular interest. Content-based filtering simply
does not allow a user to expand his or her interests.
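The TV-guide example above can be sketched as scoring each program's content attributes against stored preferences; the attribute names, weights, and threshold are illustrative assumptions, not the disclosed system.

```python
# Sketch of content-based filtering for a personalized TV guide:
# a program is scored by correlating its content attributes (type,
# violence level) with the user's stored preferences.

def score_program(program, preferences):
    """Higher when the program's type is liked; penalized when its
    violence level exceeds the user's tolerance."""
    score = preferences.get("types", {}).get(program["type"], 0.0)
    if program["violence"] > preferences.get("max_violence", 10):
        score -= 1.0
    return score

def recommend(programs, preferences, threshold=0.5):
    return [p["title"] for p in programs
            if score_program(p, preferences) >= threshold]

prefs = {"types": {"documentary": 0.9, "true_crime": 0.0},
         "max_violence": 3}
lineup = recommend([
    {"title": "Nature Hour", "type": "documentary", "violence": 1},
    {"title": "America's Most Wanted", "type": "true_crime", "violence": 4},
], prefs)
# the serendipity gap: with no recorded interest in true crime,
# the second show can never be suggested by this filter alone
```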
[0022] Social-based filtering is one potential solution to the
serendipity dilemma. Social-based filtering basically attempts to
exploit similarities between the profiles of different users to
filter content. Social-based filtering can be an extension of
content-based filtering. Once a user model is constructed and is
being maintained, social-based filtering algorithms can compare
this model to other user models and weigh each model for the level
of similarity with the user model. Orwant, J., "For Want of a Bit
The User Was Lost: Cheap User Modeling", IBM Systems Journal, Vol.
35, Nos. 3 & 4, 1996, and Shardanand, U., "Social Information
Filtering for Music Recommendation", MIT MS Thesis, 1994, disclose
algorithms for computing similarity between user models.
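One common way to weigh user models against each other, sketched below, is cosine similarity over interest vectors; the profile format is an assumption, and this is only one of the similarity measures the cited works discuss.

```python
# Sketch of the similarity weighting behind social-based filtering:
# each other user's model is compared to the target model, and more
# similar users contribute more to predictions.
import math

def cosine_similarity(a, b):
    keys = set(a) | set(b)
    dot = sum(a.get(k, 0.0) * b.get(k, 0.0) for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def weigh_neighbors(target, others):
    """Return (similarity, user) pairs ordered most-similar first."""
    return sorted(((cosine_similarity(target, profile), user)
                   for user, profile in others.items()), reverse=True)

target = {"news": 1.0, "drama": 0.5}
ranked = weigh_neighbors(target, {
    "alice": {"news": 0.9, "drama": 0.4},
    "bob": {"sports": 1.0},
})
# a user with overlapping interests ranks above one with none
```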
SUMMARY OF THE INVENTION
[0023] Accordingly, the present invention is directed to a method
for design of an adaptive personalized interactive content delivery
system that substantially obviates one or more of the problems
arising from the limitations and disadvantages of the related
art.
[0024] It is an object of the present invention to provide an
architectural framework that is composed of a collection of classes
for building interactive multimedia applications and services.
[0025] It is a further object of the present invention to provide
an architectural framework that will enable a developer to build up
locations that deliver services that dynamically adapt to the user,
the content, and the delivery context, resulting in an effective
contextual personalized on-line experience.
[0026] Another object of the present invention is to provide an
architectural framework that supports and promotes the creation of
reusable components for building personalized interactive
multimedia presentations of complex applications.
[0027] Accordingly, one aspect of the present invention is directed
to a method for creating and delivering an interactive multimedia
application that can dynamically adapt to at least one user. At
least one user model is created for at least one user, the at least
one user model represents interests and trends of the at least one
user. A multimedia story is developed based on the at least one
user model. A customized presentation of the multimedia story is
generated where the at least one multimedia story allows for
multiple presentations of the multimedia story. The customized
presentation is displayed to the at least one user. The customized
presentation is modified based on input from the at least one
user.
[0028] In another aspect of the present invention, the story
includes a protean-like narrative.
[0029] In still another aspect of the present invention, the
creating includes: gathering data from the at least one user;
analyzing a history of the at least one user; monitoring data
related to the at least one user; detecting patterns and trends of
the at least one user; and preparing the at least one user model
based on the gathering, analyzing, monitoring, and detecting. The
at least one user model is modified periodically based on
information obtained from periodically repeating the gathering,
analyzing, monitoring, and detecting.
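The gathering, analyzing, monitoring, and detecting steps above can be sketched as a periodic profile-refinement pass; the event format and the simple frequency rule for detecting a trend are assumptions for illustration.

```python
# Sketch of periodic user-model refinement: monitored interaction
# events are analyzed for patterns, and detected trends are folded
# back into the stored user model.

def detect_trends(events, min_count=2):
    """A topic observed at least min_count times counts as a trend."""
    counts = {}
    for topic in events:
        counts[topic] = counts.get(topic, 0) + 1
    return {t for t, n in counts.items() if n >= min_count}

def refine_profile(profile, events):
    """Fold newly detected trends into the existing user model."""
    for trend in detect_trends(events):
        profile[trend] = profile.get(trend, 0.0) + 1.0
    return profile

# one refinement pass over a batch of monitored events
profile = refine_profile({"sports": 1.0}, ["news", "news", "sports"])
```

Repeating this pass on each new batch of events gives the periodic modification of the user model described above.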
[0030] In a further aspect of the present invention, the at least
one user model includes a set of models.
[0031] In another aspect of the present invention, the story
includes at least one content element. The at least one content
element characterizes data of the interactive multimedia
application. The at least one content element is representable in
multiple forms.
[0032] In still another aspect of the present invention, the at
least one user model comprises a set of models.
[0033] In a further aspect of the present invention, the multiple
forms include text, audio, video, image, or multimedia.
[0034] In another aspect of the present invention, the invention
includes filtering the at least one content element to produce a
subset of the at least one content element, each content element in
the subset being selected based on semantics of the filtering.
[0035] In still another aspect of the present invention, the
invention includes assembling the subset of the at least one
content element to produce the multimedia story. The multimedia
story may be personalized to the at least one user.
[0036] In a further aspect of the present invention, the generating
includes: determining the delivery environment of the at least one
user; determining the style look and feel for the presentation;
determining the narrative context for the presentation, the
narrative context being defined by the semantics of the interactive
multimedia application; and creating a customized presentation of
the multimedia story based on the delivery environment, the style
look and feel, and the narrative context.
[0037] In another aspect of the present invention, a weighted value
may be assigned to each interest and trend of the at least one
user. The weighted value represents the relative importance of each
interest and trend with respect to the at least one user's apparent
interests.
[0038] In still another aspect of the present invention, the
interactive multimedia application may be created using
object-oriented design techniques.
[0039] In a further aspect of the present invention, the invention
is directed to a method for creating and delivering an interactive
multimedia application that can dynamically adapt to at least one
user that includes: creating a story engine, the story engine may
be created by the interactive multimedia application; creating a
user model manager, the user model manager may be created by the
interactive multimedia application; providing the story engine with
application-specific information and user information; providing
the story engine with a user model from the user model manager, the
user model represents interests and trends of the at least one
user; providing the story engine with a narrative structure, the
narrative structure defined by the semantics of the interactive
multimedia application; producing user-relevant content, the
user-relevant content may be produced by applying filters to the
content model, the user model may be used for filtering purposes;
creating a presentation engine, the presentation engine may be
created by the interactive multimedia application; providing the
presentation engine with the narrative structure, content model,
and a presentation model, the content model may be empty;
generating an abstract presentation defined by the presentation
model, the abstract presentation may be generated by the
presentation engine; generating a concrete presentation by using
the abstract presentation's heuristics, the concrete presentation
may be generated by the presentation engine; and displaying the
concrete presentation by the presentation engine, wherein the
abstract presentation and the presentation engine autonomously
handle interaction scenarios, and trends and patterns are
periodically recomputed based on interaction histories and the user
models, the interactive multimedia application may be
self-improving and self-sustaining.
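The flow described in the aspect above can be condensed into a sketch: a story engine filters and sequences content using the user model and a narrative structure, and a presentation engine turns the resulting narrative into an abstract and then a concrete presentation. All classes, methods, and data names here are hypothetical illustrations, not the claimed implementation.

```python
# Condensed sketch of the story-engine / presentation-engine flow:
# the user model filters application content into a narrative, which
# is rendered first in abstract form and then in concrete form.

class StoryEngine:
    def build_narrative(self, content, user_model, structure):
        # keep only elements the user model scores positively,
        # ordered by the application-defined narrative structure
        relevant = {cid: c for cid, c in content.items()
                    if user_model.get(cid, 0) > 0}
        return [relevant[cid] for cid in structure if cid in relevant]

class PresentationEngine:
    def render(self, narrative, style="plain"):
        abstract = [("scene", element) for element in narrative]  # abstract form
        # concrete form: bind a rendering style to each scene
        return " | ".join(f"[{style}:{e}]" for _, e in abstract)

content = {"intro": "Welcome", "ad": "Buy now", "story": "Headline"}
narrative = StoryEngine().build_narrative(
    content, {"intro": 1, "story": 2}, ["intro", "story", "ad"])
page = PresentationEngine().render(narrative)
```

The un-scored "ad" element is filtered out by the user model, and the same narrative could be re-rendered with a different style for a different delivery context.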
[0040] In another aspect of the present invention, the interactive
multimedia application may be created using object-oriented design
techniques.
[0041] In still another aspect of the present invention, the
interactive multimedia application may be created using JAVA.
[0042] In a further aspect of the present invention, the invention
is directed to a system for creating and delivering interactive
multimedia applications that dynamically adapt to a user, the
system including: a user modeling subsystem where the user modeling
subsystem creates and maintains at least one user model for each
user, each at least one user model represents interests and trends
of each user; a story engine subsystem where the story engine
subsystem selects appropriate content elements and collects and
organizes these elements in accordance with a narrative framework;
and a presentation subsystem where the presentation subsystem
generates a presentation to the user, the presentation generated
uses the narrative framework.
[0043] In another aspect of the present invention, the user
modeling subsystem includes: a user model editor; a user modeling
manager; an analysis engine; and a user model database.
[0044] In still another aspect of the present invention, the story
engine subsystem includes: a first database where the first
database contains a content model library, the first database
accesses content from a content database; and a second database
where the second database contains a story template library.
[0045] In a further aspect of the present invention, the
presentation subsystem includes: a first database where the first
database contains at least one presentation model; a presentation
builder; a second database where the second database contains a
concrete presentation library; and a presentation engine.
[0046] In another aspect of the present invention, the content
elements may represent pieces of information that can be presented
via one or more media types.
[0047] In still another aspect of the present invention, the
presentation may be constrained by a narrative style, narrative
context, and demands of the delivery environment of the user.
[0048] Other exemplary embodiments and advantages of the present
invention may be ascertained by reviewing the present disclosure
and the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0049] The present invention is further described in the detailed
description which follows in reference to the noted plurality of
drawings by way of non-limiting examples of preferred embodiments
of the present invention in which like reference numerals represent
similar parts throughout the several views of the drawings and
wherein:
[0050] FIG. 1 is a flowchart showing a conventional dynamic
storytelling structure;
[0051] FIG. 2 is a flowchart showing a conventional generic
model-based presentation system;
[0052] FIG. 3 is a diagram showing a conventional memory-based
reasoning system;
[0053] FIG. 4 is a flowchart showing a multiagent storytelling
system according to the present invention;
[0054] FIG. 5 is a diagram showing a model abstraction view
controller architecture according to the present invention;
[0055] FIG. 6 is a flow diagram showing a functional overview of an
application's framework according to the present invention;
[0056] FIG. 7 is a flow diagram showing an architectural framework
system architecture according to the present invention;
[0057] FIG. 8 is a flowchart showing an exemplary presentation
object model according to the present invention;
[0058] FIG. 9 is a flowchart showing an exemplary object model for
framework according to the present invention;
[0059] FIG. 10 is a flow diagram of a model of content and story
and an exemplary representative application according to the
present invention;
[0060] FIG. 11 is a flowchart of an exemplary object model for a
representative application according to the present invention;
[0061] FIG. 12 is an exemplary interaction diagram for
bootstrapping use case according to the present invention;
[0062] FIG. 13 is a flow chart showing the relationships between a
community model, user models, and user personae according to the
present invention;
[0063] FIG. 14 is a diagram showing semantics and content;
[0064] FIG. 15 is a flow diagram showing multiple representations
of content according to the present invention;
[0065] FIG. 16 is a diagram showing selective assembly of content
according to the present invention;
[0066] FIG. 17 is a diagram showing an anatomy of an application
according to the present invention;
[0067] FIG. 18 is a diagram showing a thick client-thin server
partitioning of an application according to the present
invention;
[0068] FIG. 19 is a diagram showing a thin client-thick server
partitioning of an application according to the present
invention;
[0069] FIG. 20 is a diagram showing a peer-to-peer distributed
partitioning of an application according to the present
invention;
[0070] FIG. 21 is a user modeling class diagram according to the
present invention;
[0071] FIG. 22 is a story engine class diagram according to the
present invention;
[0072] FIG. 23 is a presentation engine class diagram according to
the present invention;
[0073] FIG. 24 is a content classes class diagram according to the
present invention;
[0074] FIG. 25 is a metadata classes class diagram according to the
present invention;
[0075] FIG. 26 is a block diagram of an exemplary content database
according to the present invention;
[0076] FIG. 27 is a block diagram of a high level view of an
exemplary web-based service;
[0077] FIG. 28 is a flowchart of an exemplary story model according
to the present invention;
[0078] FIG. 29 is a flowchart of exemplary HTML presentation
templates according to the present invention;
[0079] FIG. 30 is a flowchart of generation of a presentation
structure according to the present invention; and
[0080] FIG. 31 is a flowchart of an exemplary final form of a
presentation of a scene according to the present invention.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0081] The particulars shown herein are by way of example and for
purposes of illustrative discussion of the embodiments of the
present invention only, and are presented in the cause of providing
a useful and readily understood description of the principles and
conceptual aspects of the present invention. In this regard, no
attempt is made to show structural details of the present invention
in more detail than is necessary for a fundamental understanding of
the present invention. The description taken with the drawings
makes it apparent to those skilled in the art how the several forms
of the present invention may be embodied in practice.
[0082] The present invention is an application framework for
creating and delivering interactive multimedia applications and/or
services. The applications framework according to the present
invention will enable the deployment of applications that
dynamically adapt to the user through the personalization of
content and presentation. The applications framework may be a
software infrastructure that supports and promotes the creation of
reusable components for building personalized interactive
multimedia presentations of complex applications. In addition, the
applications framework according to the present invention is a
software foundation for enabling community and collaboration in a
networked world.
[0083] The applications framework, according to the present
invention, allows one to create an application-specific structure
and utilizes that structure to create multiple presentations from
the same set of application-specific content, where agents with
different style goals or communicative intent make sequencing and
editing decisions constrained by the user's preferences and the
characteristics of the content and the delivery device.
[0084] As discussed previously, the best way to understand a system
is to have an abstraction that describes a simpler picture of the
structure and the machinery. The architecture of an applications
framework according to the present invention may be described by a
series of abstractions, each one giving more and more concrete
artifacts. The application framework according to the present
invention encompasses many elements ranging from a dynamic
presentation system, a multiagent system, to a memory-based user
modeling system and a multi-paradigm application framework.
[0085] Reflecting on the metaphor discussed originally, the present
invention decomposes the agent that inhabits the dynamic
storytelling structure into a set of agents, each agent
corresponding to an area of competency. These include user
modeling, storytelling, and presentation design/generation. This
model subscribes to the behavior-based AI approach, where each
agent, an expert in its own right, brings its own lower level
competencies together to create a higher level competence:
emergent behavior.
[0086] As discussed previously, the requirements for media
information are a response to the dynamics of information, where
the dynamics are characterized by constantly changing information,
a broad user population, and a heterogeneous landscape of delivery
devices. The architectural framework, according to the present
invention, solves the challenge of constantly changing information
with the story agent, the challenge of a broad user population with
the user agent, and the challenge of a heterogeneous landscape of
delivery devices with the presentation agent. FIG. 4 shows a
diagram of a multiagent storytelling system according to the
present invention. The user agent, story agent and presentation
agent, according to the present invention, will now be discussed in
more detail.
[0087] User Agent
[0088] The User Agent embodies the user modeling aspect of the
architectural framework system, according to the present invention.
In effect, the user modeling system is the user to the system. It
encompasses several components that together enable the capture of
relevant interaction data, the storage and representation of a
user's interests, trends, etc., and the capability to manage and
analyze the resulting user data. The User Agent in the
architectural framework, according to the present invention,
handles capturing user feedback, maintaining the user's profile,
structuring interests and preferences, and making sense of a user's
interaction history. A User Model Editor allows the end user and/or
administrator to specify the user's interest along with a measure
of confidence.
[0089] A sensor is used to capture user interaction at the source
and understands how to extract the relevant information from the
user feedback. The sensor may be in the form of a software program.
The sensor acts as a proxy for the user modeling system. Different
kinds of sensors may be employed to gather information at their
respective sources. A sensor knows how often to gather data, what
data to monitor, and how to decode the present event into user
profile data. A sensor may be one or several software components,
where each component may capture and/or monitor different user
information.
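The polling and decoding behavior of a sensor described above can be illustrated with a small sketch. The class name, event format, and decoding rule below are hypothetical assumptions introduced for illustration; they are not part of the disclosed design.

```python
# Illustrative sketch of a sensor that monitors user events at their
# source and decodes them into user-profile observations.

class ClickSensor:
    """Monitors click events and decodes them into profile data."""

    def __init__(self, poll_interval_s=5.0):
        self.poll_interval_s = poll_interval_s  # how often to gather data
        self.observations = []                  # decoded profile data

    def decode(self, event):
        # Translate a raw event into an attribute-value observation.
        return {"feature": event["target"], "value": event["action"]}

    def on_event(self, event):
        # Only the events this sensor is responsible for are monitored.
        if event.get("kind") == "click":
            self.observations.append(self.decode(event))

sensor = ClickSensor()
sensor.on_event({"kind": "click", "target": "program", "action": "select"})
sensor.on_event({"kind": "scroll", "target": "grid", "action": "down"})
```

In this sketch, only the click event is captured; several such sensors, each monitoring a different event kind, could act together as the proxy for the user modeling system.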
[0090] The user modeling system, according to the present
invention, provides a repository for representations of each
user's preferences. A user's preferences and taste, along with
demographic information, constitute a user model. Additionally,
each user model needs to maintain some form of history that
describes the relevant "discourse" of interaction that supports the
user's preferences contained therein. Sensors provide the
interaction data.
[0091] In the architectural framework, according to the present
invention, the nature of the representation of a user model is
driven by the feature-based content that characterizes the
application data. As a result, the user models are structured as a
set of models, one for each domain or application (e.g., TV
viewing, shopping, etc.). This is in contrast to the persona
concept described previously. A persona relates to a role rather
than an application-specific profile. A persona is a model that
exists independent of an application-oriented or a domain-oriented
model.
[0092] In order for the user models to be useful to other
components in the architectural framework of the present invention
(e.g., the Story Agent), the User Agent is introspective and
computes/detects trends and patterns. The User Agent constantly
reevaluates the importance of features and the values the features
can hold in the domain oriented models. In the architectural
framework according to the present invention, the User Agent
includes a reasoning component, an analysis engine, that analyzes a
user's data and computes correlations between features and
feature-values as defined by the memory based learning framework
described previously.
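The analysis step described above can be sketched as follows. The function below is an illustrative stand-in for the memory-based analysis engine: it simply derives relative feature-value weights from counts in an interaction history, which is an assumption for illustration rather than the actual correlation computation.

```python
from collections import Counter

def analyze_history(history):
    """Recompute feature-value weights from an interaction history.

    `history` is a list of (feature, value) observations; the returned
    weights approximate the relative importance of each feature-value
    pair, standing in for the memory-based learning step.
    """
    counts = Counter(history)
    total = sum(counts.values())
    return {fv: n / total for fv, n in counts.items()}

weights = analyze_history([
    ("genre", "nature"), ("genre", "nature"),
    ("genre", "sports"), ("violence", "low"),
])
```

Run at appropriate times (e.g., overnight), such a routine would let the User Agent reevaluate the importance of features in the domain-oriented models.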
[0093] Story Agent
[0094] The Story Agent according to the architectural framework of
the present invention, selects the appropriate content elements and
collects and organizes these elements as prescribed by an
appropriate class of narrative framework. This narrative framework
represents a "prototype story" that is utilized by the Presentation
Agent to generate a customized presentation.
[0095] The process of selecting content is driven by the content
types as specified by the content model. The User Agent's user
model is utilized in the selection process. As described
previously, the User Agent is responsible for analyzing a user's
data and computing correlations between features and feature-values
and carrying out its role in the memory based learning scheme.
[0096] Given some application-specific criteria, the Story Agent is
responsible for choosing the best content elements by using the
embedded logic provided by the narrative framework. Once the story
agent has selected the "best set" of content elements, the
narrative framework now populated with content elements is supplied
to the Presentation Agent's computation of the final story's look
and feel.
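The selection of a "best set" of content elements can be sketched as a scoring pass against the user model's feature weights. The data structures and the scoring rule below are illustrative assumptions; the disclosed system delegates this logic to the narrative framework.

```python
def select_best(elements, user_weights, top_n=2):
    """Score content elements against user-model feature weights and
    return the best-scoring subset (the "best set" handed to the
    Presentation Agent). All structures here are illustrative.
    """
    def score(element):
        # Sum the user's weights for every feature the element carries.
        return sum(user_weights.get((f, v), 0.0)
                   for f, v in element["features"].items())
    return sorted(elements, key=score, reverse=True)[:top_n]

elements = [
    {"id": "doc1", "features": {"genre": "nature"}},
    {"id": "doc2", "features": {"genre": "finance"}},
    {"id": "doc3", "features": {"genre": "nature", "length": "short"}},
]
weights = {("genre", "nature"): 0.6, ("length", "short"): 0.2}
best = select_best(elements, weights)
```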
[0097] Presentation Agent
[0098] Using the dynamic storytelling metaphor discussed
previously, the Presentation Agent according to the present
invention takes the dynamically generated narrative (the populated
narrative framework) and creates a presentation. The Presentation
Agent generates a presentation by design where the design is
constrained by a narrative style (narrative context), by a
particular look and feel (style context), and by the demands of the
delivery environment (delivery context). The agency of this agent
is brought to life by specifying an abstract representation of a
presentation. The model based approach to user interface design
supports the idea of an abstract, declarative representation of a
user interface. The approach according to the present invention,
borrows from this approach, but only superficially, mainly in high
level terms.
[0099] The presentation generation aspect of the architectural
framework according to the present invention is a novel yet simple
solution. Presentation design involves four types of components:
abstract presentations, concrete presentations, reactors, and
design constraints.
[0100] An abstract presentation is a meta-presentation. An abstract
presentation is a loose design representation of a concrete
presentation. Abstract presentations may have parts that themselves
are abstract presentations. This results in the creation of a
hierarchically composed presentation. Defining a modular, loosely
structured presentation not restricted to a final form and layout
enables the creation and maintenance of flexible and dynamic
multimedia presentations. An abstract presentation maps to and
represents a content element. It serves as the link between
interface and content.
[0101] Concrete presentations are either standard user interface
components, i.e., widgets, or wrapper-style components that
repurpose other widgets. A wrapper is a software component (e.g.,
an object) that hides the real software or device that is being
used. The concrete presentation objects are the actual user
interface objects that appear on the display.
[0102] Reactors are action objects that associate concrete
presentation events and an operation on a content element. Reactors
are registered (i.e., associated with appropriate software
components to handle the operation) and managed by abstract
presentations.
[0103] Design constraints are heuristics that guide the final
make-up of the presentation, including its layout, style and
content. These rules can be classified into three categories:
narrative context, style context, and delivery context. Narrative
context rules govern narrative-specific realization, e.g., in a
personalized TV program guide, creating program line-ups along
thematic lines. Style context rules govern style and look and feel,
e.g., a tabular view versus a 3D landscape of the schedule in a
personalized TV program schedule. Delivery context rules deal with
the delivery environment, e.g., the screen real estate allocated
for a desktop versus a PDA (Personal Digital Assistant), connection
protocol, browser, modem speed, etc.
[0104] The abstract presentation and design constraints represent
the declarative aspects of the architectural framework according to
the present invention, and together they serve as an abstraction of
the final interface. As shown in FIG. 5, this framework is an
extension of the known Model-View-Controller (MVC) user interface
architecture. The MVC paradigm partitions a user interface
architecture into three components: a model (an abstraction of the
problem domain), a view (a visual representation of the model or
part of the model), and a controller (an input handler). Typically,
each of these components is a collection of one or more
objects.
[0105] In the known MVC user interface architecture, the user
issues some form of input or command which is captured by the
controller. The controller, in turn, takes the command (e.g., mouse
click, key stroke, speech utterance, etc.) and translates it into
an action on a model object. A controller is associated with a
view. The view displays the state of the model and relies on a
dependency mechanism whereby the view registers itself as dependent
on the model. As a result, whenever the model undergoes a change,
the model broadcasts a notification to all dependent views that a
change has occurred. The views in turn query the model to retrieve
the details of the change and update themselves accordingly.
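The dependency mechanism described above can be sketched in a few lines. The class and method names below are illustrative, not drawn from the disclosure; they show only the register/broadcast/query cycle of the known MVC architecture.

```python
class Model:
    """Minimal model with MVC-style dependency broadcasting."""

    def __init__(self):
        self._dependents = []
        self._state = {}

    def register(self, view):
        self._dependents.append(view)   # view registers as dependent

    def set(self, key, value):
        self._state[key] = value
        for view in self._dependents:   # anonymous broadcast of the change
            view.update(self)

    def get(self, key):
        return self._state[key]

class View:
    """Displays part of the model's state; knows nothing of input handling."""

    def __init__(self, model, key):
        self.key = key
        self.shown = None
        model.register(self)

    def update(self, model):
        # On notification, query the model to retrieve the change details.
        self.shown = model.get(self.key)

model = Model()
view = View(model, "title")
model.set("title", "News at 6")
```

Note that the model never holds a direct reference to a view's display logic; it only broadcasts, which is what permits the same model to drive different views.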
[0106] The model encapsulates the semantics of application objects.
Subcomponents of the model hide the details of communication,
database access, etc. The model is not aware of the views (or the
controllers for that matter) but only through anonymous message
broadcasting does the model communicate with its dependent
views.
[0107] The MVC architecture promotes modularity and the decoupling
of application data from the mechanisms to view and manipulate that
data. As a result, the architecture allows for software reuse, in
both a design and an implementation sense. In theory, one can reuse
a model in different applications, i.e., the same model with
different views. Additionally, one may reuse a view (or controller)
in different applications, i.e., the same view with different
models.
[0108] The known MVC architecture assumes a set of views statically
bound to each model. The architectural framework according to the
present invention has extended this architecture by decomposing
the view-controller complement set into an abstract presentation
component and a concrete presentation component, as shown in FIG. 5.
The concrete component encapsulates the traditional view-controller
objects, but only in an incomplete and unrealized state. The
abstract component dynamically generates and manages the final form
look and feel of the concrete components. Moreover, by representing
the interface in abstract terms, the present invention effectively
enables the creation of dynamically bound views not possible in the
currently known MVC tradition. The architectural framework
according to the present invention, defines a declarative based
Model-Abstraction-View-Controller user interface architecture.
[0109] FIG. 6 shows a functional overview of an application
framework according to the present invention. In FIG. 6, the items
in the circles represent subsystems. The items inside the parallel
lines generally represent models, except for the user feedback. The
process of developing an application using an application framework
according to the present invention consists of designing or
specifying (and possibly reusing or repurposing pre-existing
models) the models required by the agents of the framework.
Basically, the models for content, presentation, and the user need
to be created. The designer of the application must design and
specify: a content model, a story model (narrative structure), a
presentation model, and a domain user model for the user model.
Creating a content model is building a typical model of the
application, as called for in the traditional Model-View-Controller
sense. The
content model is a representation of the semantics that
characterize the content elements that make up an application,
e.g., a TV program schedule consisting of program line-ups where
the line-ups consist of time slots populated by TV programs slated
to be broadcast.
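The TV program schedule example of a content model can be sketched with a few plain data classes. The field names below are illustrative assumptions chosen to match the description (line-ups of time slots populated by programs).

```python
from dataclasses import dataclass, field

@dataclass
class Program:
    title: str
    genre: str

@dataclass
class TimeSlot:
    start: str                          # e.g., "18:00"
    programs: list = field(default_factory=list)

@dataclass
class LineUp:
    channel: str
    slots: list = field(default_factory=list)

# A one-slot line-up for channel 7, slated for the 18:00 time slot.
lineup = LineUp("7", [TimeSlot("18:00", [Program("Evening News", "news")])])
```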
[0110] The Story Model (narrative structure) is a "protean"-like
content model. It serves to organize the content elements that have
been selected as candidates for the presentation generation phase.
The narrative is basically the universe of possibilities as defined
by the semantics of the application, e.g., creating a personalized
TV program guide that presents a set of personalized line-ups
involves a narrative structure that groups candidate programs
according to their start time (candidacy is a complex step of
consulting a user model and predicting the best content elements
given a set of application specific criteria).
[0111] The presentation model specifies the components of a
presentation, the behavior (linking actions in the presentation
units to application content functions) and design heuristics
(rules that guide setting the presentation style, presentation
context, and display context), e.g., in the personalized TV program
guide, if the presentation context is thematic then generate a
personalized line-up where each line-up represents a particular
theme given the candidate set. The Presentation Builder stores the
presentation model in persistent storage.
[0112] Building a domain model for the user model involves
accounting for the features that make up the content elements in an
application, e.g., using the personalized TV program guide once
again, features would include, e.g., program type, level of
violence, etc.
[0113] Architecture/Subsystems
[0114] FIG. 7 shows a flowchart of an exemplary system architecture
of an architectural framework according to the present invention.
The User Model Manager manages the storage and retrieval of users'
interests and trends and interfaces to other subsystems.
The Analysis Engine is used to analyze interaction histories and
detect patterns and trends. The User Model Editor is an
administrative tool that allows a user and/or administrator to
modify a user model. The user modeling subsystem uses user models
and community models. A community model is a model of a group of
users that share some common interest or trend.
[0115] The Story Engine selects application-specific content to
serve as the basis for a new presentation, and generates a
narrative/story that allows for multiple play-outs of different
presentations of the story. The Story Engine uses the story model
and the user model. The Presentation Engine is responsible for
interpreting an abstract presentation model and creating concrete
presentation objects. The Presentation Engine also resolves
constraints imposed by abstract presentations, input content, and
display context as part of the final realization of the concrete
presentation objects. The Presentation Engine uses the presentation
model and the story model. The Presentation Builder is responsible
for storing presentation models in persistent form. The
Presentation Builder uses the presentation model.
[0116] Designing and delivering an application using an
architectural framework according to the present invention
generally includes the following steps:
[0117] 1. The application creates a Story Engine (SE) and a User
Model Manager (UMM).
[0118] 2. The application informs the SE who the user is and any
other application-specific information deemed necessary.
[0119] 3. The SE requests a user model from the UMM.
[0120] 4. Upon receiving a user model from the UMM, the SE is
handed a narrative structure as defined by the application
semantics.
[0121] 5. Applying filters contained in the narrative framework,
the SE places the results (i.e., content elements) in the narrative
structure.
[0122] 6. The application creates a Presentation Engine (PE).
[0123] 7. The PE is handed the narrative structure for the
application, the content model, and the appropriate presentation
model.
[0124] 8. The PE generates an abstract presentation as defined by a
presentation model.
[0125] 9. The PE exercises the abstract presentation's heuristics
and generates a concrete presentation.
[0126] 10. The PE displays the concrete presentation.
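The ten steps above can be sketched as a single driver routine. Every class here is a hypothetical stand-in exposing only the responsibilities named in the text; the actual interfaces are not specified at this level of the disclosure.

```python
# Hypothetical stand-ins for the framework subsystems; only the call
# sequence (steps 1-10 above) is meant to be illustrative.

class UserModelManager:
    def get_model(self, user_id):
        return {"user": user_id}

class StoryEngine:
    def __init__(self, user_id):
        self.user_id = user_id

    def populate(self, narrative, user_model):
        # Apply the narrative's filters and fill its structure (step 5).
        return {"narrative": narrative, "for": user_model["user"]}

class PresentationEngine:
    def __init__(self):
        self.displayed = None

    def build_abstract(self, story):      # steps 7-8
        return {"abstract_of": story}

    def realize(self, abstract):          # step 9
        return {"concrete_of": abstract}

    def display(self, concrete):          # step 10
        self.displayed = concrete

def run_application(user_id, narrative):
    se = StoryEngine(user_id)             # steps 1-2
    umm = UserModelManager()
    model = umm.get_model(user_id)        # steps 3-4
    story = se.populate(narrative, model) # step 5
    pe = PresentationEngine()             # step 6
    pe.display(pe.realize(pe.build_abstract(story)))  # steps 7-10
    return pe

pe = run_application("alice", "program-guide")
```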
[0127] The application is self-sustaining at this point. The
abstract presentation, along with the presentation engine,
autonomously handles most interaction scenarios using the flexible
and adaptable capabilities encapsulated in the presentation. Two
basic scenarios exist that will violate this state. First, the user
requests content data that did not play a role in the story
generation (e.g., in the personalized TV program guide,
personalized line-ups from 6:00 p.m. to 9:00 p.m. are presented,
but the user now wants to expand the window of the program guide by
looking at programs from 6:00 p.m. to 12:00 a.m.). In the second
scenario, there is a change to the content model and its elements,
requiring generation of the story by re-evaluating the narrative
and recreating the presentation (e.g., in the personalized TV
program guide, a programming change has occurred and a new show has
been scheduled). Both of these scenarios involve executing steps
5-8.
[0128] At appropriate times (e.g., overnight), the Analysis Engine
examines the interaction histories and the user models and
recomputes trends and patterns. The user models are then revised
accordingly. An architectural framework system according to the
present invention is thus self-improving, self-sustaining and
virtually perpetual.
[0129] Abstract Class Specifications
[0130] Some exemplary object models for the architectural framework
according to the present invention follow. FIG. 8 shows an
exemplary presentation object model according to the present
invention. FIG. 9 shows an exemplary object model for the
architectural framework according to the present invention. The
various boxes in the object models represent classes of objects of
the architectural framework. The following tables list the classes
along with their associated responsibilities and attributes.
1 Presentation Classes: AbstractPresentation
  Responsibilities: Serve as prototype for a set of presentations;
  Manage a set of subordinate presentations; Add a presentation;
  Delete a presentation
  Attributes: a ConcretePresentation; set of reactors; set of
  associated constraints (rules)
[0131]
2 ConcretePresentation
  Responsibilities: Interface to windowing/GUI environment
  Attributes: a ConcretePresentation
[0132]
3 Sensor
  Responsibilities: Reports user behavior to UserModelManager;
  Monitors for specific events
  Attributes: an AbstractPresentation; an Event Type; a
  UserModelManager
[0133]
4 Reactor
  Responsibilities: Encapsulates an application-specific behavior;
  Acts as an action/command object
  Attributes: an AbstractPresentation; a ContentElement
[0134]
5 PresentationEngine
  Responsibilities: Creates and displays AbstractPresentations;
  Interprets the declarative specification associated with an
  AbstractPresentation; Reports invalidated presentations to
  Application; Resolves presentation's constraints and realizes
  ConcretePresentations
  Attributes: an AbstractPresentation (top-level); an Application
[0135]
6 PresentationBuilder
  Responsibilities: Stores AbstractPresentation in persistent
  storage (e.g., file)
  Attributes: an AbstractPresentation (top-level)
[0136]
7 Content Classes: ContentElement
  Responsibilities: Represents an application-specific object
[0137]
8 Story Classes: StoryEngine
  Responsibilities: Selects ContentElements as specified by Story
  type and filtered by the UserModel; Creates a Story structure
  Attributes: a UserModel; a ContentElement (top-level); a Story
[0138]
9 Story
  Responsibilities: Represents a particular narrative structure,
  application-specific
  Attributes: set of ContentElements
[0139]
10 User Modeling Classes: UserModel
  Responsibilities: Maintain multiple personae; Add a persona;
  Remove a persona; Find a persona
  Attributes: set of personal data; set of Persona
[0140]
11 Persona
  Responsibilities: Add a situation-action pair; Remove a
  situation-action pair; Find a situation-action pair
  Attributes: set of preferences; set of situation-action pairs
  (history)
[0141]
12 Community
  Responsibilities: Add UserModel; Delete UserModel; Construct
  Average UserModel; Find UserModel
  Attributes: set of UserModels
[0142]
13 Society
  Responsibilities: Add Community; Delete Community; Construct
  Average Community; Find Community
  Attributes: set of Communities
[0143]
14 UserModelManager
  Responsibilities: Gateway to AnalysisEngine, UserModelEditor, and
  UserModels; Requests for Sensor from PresentationEngine
  Attributes: set of Models
[0144]
15 AnalysisEngine
  Responsibilities: Performs historical/trend analysis on a
  UserModel's histories
  Attributes: a UserModel
[0145]
16 UserModelEditor
  Responsibilities: Presents a UserModel's set of persona; Presents
  a persona
  Attributes: a UserModel
[0146]
17 Application Classes: Application
  Responsibilities: Sequences the "tools" to create a presentation;
  Handle application-specific events (e.g., invalidated
  presentations, special timers)
  Attributes: a PresentationEngine; a UserModelManager; a
  StoryEngine
[0147] An exemplary representative application will be defined and
used to illustrate the capabilities of the architectural framework
according to the present invention. This representative application
is a TV program guide. The exemplary TV program guide is a
personalized program guide (PPG) that suggests TV programs that may
be of interest to the user right alongside the traditional program
schedule. The following assumptions will be used: (1) the
presentation model has been previously specified and declared; (2)
the application is always up and running (i.e., 24 hrs a day); and
(3) the Analysis Engine has conducted its initial analysis of the
viewer's history.
[0148] A content model is defined in order to create the
application. FIG. 10 shows an exemplary object model of content and
story in the exemplary application. FIG. 11 shows a flowchart for
an exemplary object model for the exemplary application. In FIG. 9,
the run time representation of the overall exemplary application is
outlined. The PPG application displays a program guide that
includes three areas: the current movie playback component, a
current informational panel, and the program schedule grid.
[0149] An exemplary case that demonstrates the mechanics and
structure of the architectural framework according to the present
invention will now be presented. This exemplary case relates to
bootstrapping an application from its initial interaction with the
user modeling system and the story construction process, to the
initial presentation and event handling by the presentation engine.
Two assumptions have been made: (1) the presentation model has been
previously specified and declared; and (2) the Analysis Engine has
conducted its initial analysis of the viewer's history. FIG. 12
shows an exemplary interaction diagram for this exemplary
bootstrapping use case according to the architectural framework of
the present invention. The following activities occur during this
bootstrapping:
[0150] (1) anApplication creates a User Model Manager
(aUserModelMgr);
[0151] (2) anApplication creates a Story Engine (aStoryEngine);
[0152] (3) anApplication creates the standard Program Schedule
(aProgramSchedule) based on some initial time boundaries;
[0153] (4) aStoryEngine requests a user model based on a Name/ID
from the User Model Manager (aUserModelMgr);
[0154] (5) aStoryEngine retrieves the program schedule
(aProgramSchedule);
[0155] (6) aStoryEngine selects appropriate application content
based on the user model (aUserModel) and the input content
(aProgramSchedule);
[0156] (7) aStoryEngine generates a story based on a story template
program guide narrative (aPgmGuideNarrative);
[0157] (8) anApplication creates an instance of a Presentation
Engine (aPresentationEngine);
[0158] (9) aPresentationEngine creates an abstract presentation
(likely a series of nested presentations) by restoring the object
from persistent storage, e.g., streaming from a file;
[0159] (10) aPresentationEngine creates all specified interactors
for each abstract presentation (in this example, the aSelectCmd
interactor);
[0160] (11) aPresentationEngine creates a grid object to aid in the
layout of the overall presentation;
[0161] (12) aPresentationEngine creates all concrete presentation
objects as declared by their corresponding abstract
presentation;
[0162] (13) aPresentationEngine resolves constraints as specified
by the display rules and application rules and reconciled with the
input content and the display context by the aPresentationEngine's
constraint solver/rule interpreter;
[0163] (14) Selective presentation can occur as a result of the
previous step. A grid consistently preserves the overall
presentation design;
[0164] (15) aPresentationEngine realizes the concrete
presentation's (aConcretePresentation) by determining its final
form including attributes and settings;
[0165] (16) aPresentationEngine displays the concrete presentation
(aConcretePresentation);
[0166] (17) aPresentationEngine notifies the application
(anApplication) of its successful initialization; and
[0167] (18) anApplication evokes aPresentationEngine's event
handling routine.
[0168] Software and Design
[0169] Another exemplary embodiment of a service is in the context
of the World Wide Web, and more specifically a corporate gateway
web site will be used to further describe the architectural
framework according to the present invention. A corporate gateway
web site may be designed to serve a company's online product and
service catalogue, customer service center, or depending on the
company's line of business, serve as a content navigator. In this
exemplary embodiment, XYZ Communications is a communications
company that has set up a web site that includes corporate product
and service information and serves as a gateway to aggregated
content (e.g., special events, community information, etc.).
[0170] The present invention uses a basic structure called a
feature-vector that consists of attribute-value pairs, e.g.,
"keyword=cooking", or "author=Smith", etc. A feature in a
feature-vector is represented by a type (e.g., keyword,
geo-location, address) where the feature type encapsulates
validation routines for authenticating the feature's data. These
routines may be utilized by meta tools such as editors to validate
the data entered at the interface.
[0171] A user model simply contains a feature-vector that is made
up of a set of weighted features. The weight designates the
relative importance of the feature with respect to a user's
apparent interest. A feature and its associated weight may be
explicitly or implicitly defined, i.e., manually set by the user,
or derived by some statistical or machine learning algorithm
analyzing a user's previous interaction history. Community models
that represent a set of users may be created by bringing together
users for different reasons (location, interest, job, or event).
Therefore, a user model may actually represent an aggregate of
several user models, each one representing a different persona,
e.g., work, home, etc. as shown in FIG. 13.
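The feature-vector structure and the persona aggregation described above can be sketched as follows. The class and method names are illustrative assumptions; only the weighted attribute-value pairs and the per-persona grouping come from the text.

```python
class FeatureVector:
    """A set of weighted features, each an attribute-value pair
    such as keyword=cooking or author=Smith."""

    def __init__(self):
        self.weights = {}

    def set(self, feature, value, weight):
        # The weight may be set explicitly by the user or derived by
        # an analysis of the user's interaction history.
        self.weights[(feature, value)] = weight

class UserModel:
    """An aggregate of persona-specific feature vectors
    (e.g., work, home), as shown in FIG. 13."""

    def __init__(self):
        self.personas = {}

    def persona(self, name):
        return self.personas.setdefault(name, FeatureVector())

user = UserModel()
user.persona("home").set("keyword", "cooking", 0.9)
user.persona("work").set("keyword", "telecom", 0.7)
```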
[0172] As previously discussed, a content model is required by a
content assembly engine to put together a story tailored to a
user's request and profile. This requires a content model to be
able to associate the various content elements semantically to form
a story and to associate the content with a user's preferences. In
addition, a presentation generator (i.e., Presentation Engine)
needs to provide adaptive content presentation given the delivery
context, including the end user device configuration, network
bandwidth, etc. The content model should be able to offer
alternative presentations of the content for the presentation
generator to select from.
[0173] In the architectural framework according to the present
invention, a content element is defined as an object representing a
piece of information that can be presented via one or more media
types. FIG. 14 shows a diagram of semantics and content where the
semantics describe what a content element is about. The semantics
could potentially enable a content assembly engine to associate
content elements on a more semantic level. An event or item on our
exemplary web site could be represented as a content element that
is media independent, but can manifest itself in multiple forms or
representations such as a text document, an audio/video clip, or
even a multimedia presentation. For example, assuming our exemplary
web site included events such as information regarding a 1996 game,
a baseball ad, and a nature ad, FIG. 15 displays how each of these
events could have multiple representations of content.
[0174] The application framework according to the present invention
uses dynamic content assembly. With this approach, the development
of an application or service is similar to the process of creating
a dynamic story or movie that can adapt to a user, the available
content, and the context at the time of delivery. The present
invention uses, among other concepts, three basic concepts in
support of dynamic content assembly: story, filter, and scene.
[0175] A filter is a construct that takes in a set of content
elements and returns a subset of the original inputs. A filter has
specific filtering semantics, e.g., a feature-based filter that
uses a feature (e.g., "keyword=television") to comb through an
input set of content elements to retrieve content elements that
match the feature. FIG. 16 shows an example of two such filters and
the results being joined by an AndFilter that "ands" the results of
two other filters. In this example, we have selected two content
elements, one selected explicitly by a content ID and the other by
filtering for advertisements that have been characterized to be
related to nature.
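The FIG. 16 arrangement can be sketched as follows. The class names mirror the patent's FeatureFilter and AndFilter, but the implementation details, the dictionary-based content elements, and the IDs used are assumptions for illustration; here the AndFilter simply combines the outputs of its contained filters, as in the figure's example.

```python
# Illustrative sketch of feature-based filtering and filter combination.
class FeatureFilter:
    """Keeps only content elements matching a feature pattern."""
    def __init__(self, key, value):
        self.key, self.value = key, value

    def apply(self, inputs):
        return [ce for ce in inputs if ce.get(self.key) == self.value]

class AndFilter:
    """"Ands" the results of two or more contained filters into one output set."""
    def __init__(self, *filters):
        self.filters = filters

    def apply(self, inputs):
        outputs = []
        for f in self.filters:
            for ce in f.apply(inputs):
                if ce not in outputs:       # avoid duplicate elements
                    outputs.append(ce)
        return outputs

# Hypothetical content pool: one element selected explicitly by ID,
# another by filtering for nature-related advertisements.
elements = [
    {"id": "pr-120", "keyword": "television"},
    {"id": "ad-7", "category": "nature"},
    {"id": "ad-9", "category": "sports"},
]
combined = AndFilter(FeatureFilter("id", "pr-120"),
                     FeatureFilter("category", "nature"))
```

Calling `combined.apply(elements)` yields the two selected content elements, matching the two-filter join of FIG. 16.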
[0176] By chaining filters, complex filtering patterns can be
produced. A composite filter enables the creation of hierarchical,
layered, reusable content assemblies. A scene is a composite filter
that corresponds to one element of a story. By
assembling a series of modular, layered scenes, we can tell a story
at a fine level of granularity tuned to the user and the delivery
context.
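A minimal sketch of the scene concept follows. The names follow the patent (Scene as a composite filter), but the mechanics shown, in particular each contained filter keeping its own outputs rather than merging them, are assumptions for illustration.

```python
# Minimal sketch: a Scene as a composite of layered sub-filters whose
# results remain in the outputs of the contained filters.
class KeywordFilter:
    def __init__(self, keyword):
        self.keyword = keyword
        self.outputs = []

    def apply(self, inputs):
        self.outputs = [ce for ce in inputs if self.keyword in ce["keywords"]]

class Scene:
    """A composite filter corresponding to one element of a story."""
    def __init__(self, name, filters):
        self.name = name
        self.filters = filters

    def apply(self, inputs):
        # Evaluate each layered sub-filter; each keeps its own outputs,
        # so the presentation side can render them as separate regions.
        for f in self.filters:
            f.apply(inputs)

# Hypothetical content pool for the exemplary web site.
content = [
    {"id": "a", "keywords": {"baseball", "sports"}},
    {"id": "b", "keywords": {"nature"}},
]
sports = KeywordFilter("sports")
nature = KeywordFilter("nature")
home = Scene("home-page", [sports, nature])
home.apply(content)
```

Assembling several such modular scenes, one per story element, is what lets the story be told at a fine level of granularity.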
[0177] The architectural framework according to the present
invention uses adaptive presentation in that scenes are presented
in different ways depending on the available context of delivery
(such as available display real estate, the network connection,
etc.). To support adaptive presentation, a presentation engine may
generate presentations that take into account the context of
delivery and select appropriate media representations to show the
content element. In the present invention, a template that acts as
a proxy for a story element or scene element is used to lay out and
arrange the presentation elements. A primitive template has the
responsibility of selecting the appropriate media element. A
composite template supports the design of hierarchical
presentations with a fine level of specification and
control. By implementing these concepts and objects, the present
invention supports the creation of custom presentation components
that are refinements of the basic presentation classes that can
render a scene to a user in the most appropriate form. In the
present invention, presentation components have the ability to
render a scene without having to change the story.
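The primitive template's job of choosing a media representation for the delivery context can be sketched as below. The context names, the preference table, and the text fallback are all assumptions; the patent leaves the selection heuristic to concrete template subclasses.

```python
# Hedged sketch of a primitive template's selection step: rank the
# available media types by suitability for the delivery context.
PREFERENCE = {
    # Hypothetical delivery contexts and their assumed media orderings.
    "broadband-desktop": ["video", "audio", "text"],
    "low-bandwidth-phone": ["text", "audio"],
}

def select_representation(representations, context):
    """Pick the most appropriate available media type for the context.

    representations: mapping of media type -> concrete asset.
    context: a delivery-context key (display real estate, bandwidth, ...).
    """
    for media_type in PREFERENCE.get(context, ["text"]):
        if media_type in representations:
            return representations[media_type]
    return None   # no suitable representation available
```

Because the template, not the story, makes this choice, the same scene renders as video on a desktop and as text on a constrained device without any change to the story.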
[0178] Application Subsystems
[0179] Creating an application in accordance with the present
invention involves interfacing to each subsystem's public
interface. Each subsystem's public interface is encapsulated in the
public operations of a select set of objects within each subsystem.
An application is basically the glue that brings together
invocations to the public interfaces of the subsystems as well as
to any other external subsystems, e.g., databases. FIG. 17 shows a
diagram of an anatomy of an application using the architectural
framework according to the present invention. Further, the
following pseudo-code describes the basic framework of an
application according to the present invention:
[0180] Application:
main ( )
  umMgr = new UserModelManager
  storyEngine = new StoryEngine(umMgr); storyEngine.init( );
  presentationEngine = new PresentationEngine(umMgr); presentationEngine.init( )
  current_scene = StartScene
  while not exit( )
    StoryEngine.assemble(current_scene)
    storyEvent = PresentationEngine.present(current_scene)
    current_scene = StoryEngine.dispatchEvent(storyEvent)
  end while
end main
[0181] In summary, an application proceeds through the following
steps:
[0182] (1) Creating and initializing a UserModelManager;
[0183] (2) Creating and initializing a story engine;
[0184] (3) Creating and initializing a PresentationEngine;
[0185] (4) Selecting a story element (i.e., scene) to be the
initial element of the story;
[0186] (5) Calling upon the StoryEngine to assemble a "story" given
the initial element;
[0187] (6) Upon the StoryEngine completing its assembly task,
calling upon the PresentationEngine to present the "story";
[0188] (7) Dispatching a story-relevant event to the StoryEngine to
determine the next story element (scene) to play;
[0189] (8) Based on the outcome of the event, setting the next story
element (scene) to be assembled and subsequently presented.
[0190] Referring to our exemplary web site example, the initial
story element is set to an element representing the home page of
XYZ Communications' web site. As the story plays out with user
interaction, the system proceeds through its
assemble-present-dispatch steps, a kind of dynamically generated
contextual movie. A user could therefore rapidly end up, for
example, on one particular page of the XYZ Communications web site
because the user has shown a continuing interest in the subject
matter of that page. This interest was detected because the user
had a tendency to select information having some sort of connection
with that subject matter. For example, if the web page was a
vegetarian page, the user may have shown an interest in eating
healthier, and therefore a connection with healthy diets. The end
result is that the user would not have to wade through an extensive
set of links and/or pages on topics of little or no interest to him
or her.
[0191] Regarding the interface, in this example, the user is
interacting with a user interface or a browser, depending on the
implementation environment. Additionally, the Story Engine and the
Presentation Engine serve as single points of interface to the
story and presentation databases respectively. The User Model
Manager takes on a similar role over the database of user models by
being a gateway to any user information.
[0192] System Partitioning
[0193] Typically, an application is not just resident on one
processing element but is distributed or networked, i.e.,
distributed and partitioned in multiple elements. The following
shows embodiments of an application developed with the
architectural framework according to the present invention where
the application is partitioned across network elements.
[0194] FIG. 18 shows a network diagram of an exemplary thick
client-thin server design embodiment according to the present
invention. The client is bundled with both runtime engines (i.e.,
Story Engine, Presentation Engine) and the User Model Manager that
interfaces to a database of user models. The story, content, and
presentation databases are remotely based. This requires the Story
Engine and the Presentation Engine to be designed to hide the
details of accessing remote databases, similar to the role of the
User Model Manager, which serves as a gateway to a repository of
user models, local or remote. Moreover, the remote databases need
to be managed by server processes that can serve multiple remote
users and provide an interface to clients for remote object
communication (i.e., sockets, Java's RMI (Remote Method
Invocation), CORBA (Common Object Request Broker Architecture),
etc.).
[0195] From the access perspective, this particular design requires
the client to be either resident on the client's machine or
downloaded at the point of remote access, e.g., as a Java applet.
[0196] FIG. 19 shows a network diagram of an exemplary thin
client-thick server design embodiment according to the present
invention. With a thin client, the majority of the application
resides on the server side. Whether the complete application
resides on the server or not depends on the implementation of the
user interface and the choice of delivery environment. Regardless,
the interface needs to have the capability to access and operate
the application by sending a request to the remote host, which in
effect acts as an application server and returns a generated
presentation of the application. For example, if the application
resided on the web server, a browser could serve as the user
interface allowing the user to request a page for presentation
(shipping along some form of identification, cookie, embedded CGI
argument, etc.). The server would then assemble and generate a
complete presentation and return HTML that would be rendered in the
browser.
[0197] In terms of access, the interface may be a generic interface
like an HTML browser which only acts as an access point and waits
for a complete server-side generated presentation to be rendered in
its native HTML. Alternatively, in a web environment once again, a
Java applet that only implements a custom user interface may be
downloaded. The applet would need to interface to the application
server via some sort of protocol so it could render a server-side
generated presentation utilizing its native Java widgets.
[0198] FIG. 20 shows a network diagram of an exemplary peer-to-peer
distributed system. In principle, all components may be distributed
across the network. For a variety of reasons, e.g., load balancing,
low bandwidth, intermittent network connections, efficient resource
utilization, etc., situations could arise that warrant configuring
an application in a fully distributed architecture (e.g., CORBA,
Java RMI). This partitioning implies
that the application may be reduced to interfacing to proxy clients
that do the real work of talking to their respective components. In
a truly distributed system a component may potentially take on both
roles of server and client. Regarding access, in this
configuration, the point of access is dependent on the
implementation and/or delivery environment.
[0199] Object-Oriented Base Framework Design
[0200] The following are descriptions of exemplary class diagrams
and base classes that may be used in an applications framework
according to the present invention.
[0201] User Modeling Subsystem Classes
[0202] FIG. 21 shows an exemplary user modeling class diagram
according to the present invention. The following provide
descriptions of exemplary base classes shown in FIG. 21. The user
modeling subsystem may be a collection of classes that supports the
creation and maintenance of user models (i.e., profiles).
[0203] UserModel
[0204] Description:
[0205] This class represents a user's interests through a
FeatureVector. Features are a content-independent metadata
structure that serves as a common denominator between users and
content.
[0206] Responsibilities:
[0207] Persistent representation of user interests.
[0208] Private Properties:
[0209] user_id: string=null
[0210] A string name for a UserModel.
[0211] features: FeatureVector=null
[0212] A set of features representing a user's weighted interests.
A common denominator between UserModels and ContentElements.
[0213] Public Methods:
[0214] UserModel (uid: String=null):
[0215] Public constructor parameterized for a string-based user
ID.
[0216] deleteFeature (feat: Feature=null):void
[0217] Delete a Feature from UserModel's FeatureVector.
[0218] addFeature (feat: Feature=null):void
[0219] Add a Feature to the UserModel's FeatureVector
[0220] findByType (sname: String=null): FeatureVector
[0221] Find all Features present in the UserModel's FeatureVector
of the indicated type (i.e., typename) and return the results in a
new FeatureVector.
[0222] findEntry (feat: Feature=null): Feature
[0223] Find supplied Feature in the UserModel's FeatureVector.
[0224] similarity (features: FeatureVector=null): float
[0225] Compute a numerical score indicating the degree of
similarity between a UserModel (its FeatureVector) and the supplied
FeatureVector.
[0226] read (istream: InputStream=null): void
[0227] Read a UserModel from an InputStream (language-specific).
[0228] write (ostream: OutputStream=null): void
[0229] Write a UserModel to an OutputStream (language-specific).
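The UserModel's similarity operation computes a numerical score between a user's FeatureVector and a supplied FeatureVector. The patent leaves the scoring function open, so the cosine measure below is an assumption, and feature vectors are represented as plain dictionaries of feature name to weight for illustration.

```python
# Sketch of UserModel.similarity as a cosine measure over weighted
# feature vectors (dict of feature name -> weight).
import math

def similarity(user_features, content_features):
    """Degree of similarity between two weighted feature vectors, 0.0-1.0."""
    shared = set(user_features) & set(content_features)
    dot = sum(user_features[k] * content_features[k] for k in shared)
    norm_u = math.sqrt(sum(w * w for w in user_features.values()))
    norm_c = math.sqrt(sum(w * w for w in content_features.values()))
    # Guard against empty vectors to avoid division by zero.
    return dot / (norm_u * norm_c) if norm_u and norm_c else 0.0
```

Since Features are the common denominator between UserModels and ContentElements, the same score can rank content for a user or compare two users.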
[0230] Community
[0231] Description:
[0232] This class represents a set of users as a community. The
inherited FeatureVector from the UserModel base class is treated as
stereotype user of the community and computed by the Community
class.
[0233] Communities can be created explicitly or implicitly.
[0234] Responsibilities:
[0235] Maintain a set of users.
[0236] Maintain a stereotype of the user.
[0237] Derived from UserModel
[0238] Private Properties:
[0239] users: UserModel=null
[0240] A set of users, i.e., UserModels.
[0241] Public Methods:
[0242] Community (id: String=null):
[0243] Public constructor parameterized for a string name.
[0244] addUser (user: UserModel=default): void
[0245] Add UserModel to set of UserModels.
[0246] deleteUser (user: UserModel=null):void
[0247] Delete UserModel from set of UserModels.
[0248] getUM (uid: String=null, umMgr:UMMgr=null):void
[0249] Retrieve a UserModel through the UMmgr and cache the
UserModel in the Community.
[0250] getAll (umMgr: UMMgr=null):
[0251] Retrieve all contained UserModels through a UMMgr and cache
them in users.
[0252] UMmgr
[0253] Description:
[0254] This class serves as interface to all UserModels and
Communities. It hides all remote access to remote models.
[0255] Responsibilities:
[0256] Maintain a global set of user models.
[0257] Access point to all user models.
[0258] Private Properties:
[0259] hostID:String=null
[0260] String ID for remote or local host system.
[0261] baseID:String=null
[0262] String ID for root community (global community that contains
all models).
[0263] baseCommunity: Community=null
[0264] The root Community for all models.
[0265] Public Methods:
[0266] UMmgr (hostid: String=null, baseid:String=null):
[0267] Public constructor parameterized for string ID for the
remote host home to the UserModels, and string ID for the root
Community.
[0268] init( ): void
[0269] Initializes the UMmgr's internal data.
[0270] getUM (uid: String=null): UserModel
[0271] Retrieve a UserModel, transparently from a remote or local
host system.
[0272] saveUM (um: UserModel=null): void
[0273] Save the UserModel transparently to a remote or local
host.
[0274] deleteUM (um: UserModel=null): void
[0275] Delete UserModel from the pool of UserModels at a remote or
local host.
[0276] generateStereotype (cm: Community=null): UserModel
[0277] Based on a set of UserModels, generate a UserModel that
typifies a user in the given Community.
[0278] getCommunity (uid: String=null): Community
[0279] Retrieve a Community model, transparently from a remote or
local host system.
[0280] saveCommunity (cm: Community=null): void
[0281] Save the Community model transparently to a remote or local
host.
[0282] deleteCommunity (cm: Community=null): void
[0283] Delete Community model from the pool of UserModels at a
remote or local host.
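The UMmgr's role as a transparent gateway can be sketched as below. Callers ask for a model by ID and never see whether it came from a local cache or a remote store; the dictionary standing in for the remote repository (instead of RMI/CORBA) and the write-through policy are assumptions for illustration.

```python
# Sketch of the UMmgr gateway: transparent retrieval and saving of
# user models from a remote or local host.
class UMmgr:
    def __init__(self, remote_store):
        self.remote = remote_store      # stand-in for a networked repository
        self.cache = {}                 # locally cached user models

    def get_um(self, uid):
        """Retrieve a UserModel, transparently from remote or local."""
        if uid not in self.cache:
            self.cache[uid] = self.remote[uid]   # transparent remote fetch
        return self.cache[uid]

    def save_um(self, uid, model):
        """Save the UserModel transparently to the host."""
        self.cache[uid] = model
        self.remote[uid] = model                 # write through to the host

# Hypothetical repository with one existing user model.
remote = {"alice": {"baseball": 0.9}}
mgr = UMmgr(remote)
```

The Story Engine and Presentation Engine are described as hiding remote database access in the same way, which is what keeps partitioning decisions (FIGS. 18-20) out of application code.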
[0284] UserHistory
[0285] Description:
[0286] This class represents a repository of a chronologically-ordered
set of StoryEvents as a result of user interaction.
[0287] Responsibilities:
[0288] Maintain a set of StoryEvents.
[0289] Private Properties:
[0290] events:Set of StoryEvent=null
[0291] the set of StoryEvents that have occurred as a result of a
specific user's interaction.
[0292] Public Methods:
[0293] UserHistory (uid:UserModel=null):
[0294] Public constructor parameterized for a string name
indicating the ID of the UserModel.
[0295] addEntry (event:StoryEvent=null):void
[0296] Add an entry, i.e., StoryEvent, to the history.
[0297] deleteEntry (event:StoryEvent=null):void
[0298] Delete StoryEvent from the history of events.
[0299] purge (purgeDate:Date=null):void
[0300] Remove all StoryEvents from the history that occurred before
the indicated date.
[0301] read (istream: InputStream=null): void
[0302] Read a UserHistory from an InputStream (language-specific).
[0303] write (ostream: OutputStream=null): void
[0304] Write a UserHistory to an OutputStream
(language-specific).
[0305] AnalysisWorkbench
[0306] Description:
[0307] This class brings together an array of analysis tools to
update and better target UserModels and extract communities of
interest.
[0308] This component is basically a learning system.
[0309] Responsibilities:
[0310] Update UserModels based on their UserHistories
[0311] Compute stereotypical users for Community models.
[0312] Compute correlations between features in UserModels
[0313] Compute clusters of users to discover implicit communities
of interest.
[0314] Public Methods:
[0315] AnalysisWorkbench( ):
[0316] Public constructor.
[0317] computeStereotype (cm: Community=null): FeatureVector
[0318] Compute stereotypical user, i.e., a FeatureVector, for a
given Community.
[0319] reduce (history: UserHistory=null): FeatureVector
[0320] Reduce a UserHistory to a FeatureVector.
[0321] A UserHistory contains StoryEvents, some of which carry
information as a result of the user selecting a ContentElement.
[0322] This information serves as raw data to determine the
effectiveness/relevance of a UserModel's features.
[0323] cluster (users: Community=null): Community
[0324] Apply cluster analysis to a Community of users and generate
a Community of Communities, each representing a cluster.
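The AnalysisWorkbench's reduce operation collapses a UserHistory into a FeatureVector. A minimal sketch under stated assumptions: each history entry is modeled as the set of features of the selected ContentElement, and counts are normalized into weights; the patent does not prescribe this particular scheme.

```python
# Sketch of AnalysisWorkbench.reduce: turn a user's selection history
# into a weighted FeatureVector (dict of feature name -> weight).
def reduce_history(events):
    """Reduce a list of selected-content feature sets to weighted features."""
    counts = {}
    for features in events:
        for feat in features:
            counts[feat] = counts.get(feat, 0) + 1
    total = sum(counts.values())
    # Normalize raw selection counts into weights summing to 1.0.
    return {feat: n / total for feat, n in counts.items()} if total else {}
```

The resulting vector can then update the UserModel, so repeated selections of, say, baseball content gradually raise that feature's weight.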
[0325] Story Engine Subsystem Classes
[0326] FIG. 22 shows an exemplary StoryEngine class diagram
according to the present invention. The following provide
descriptions of exemplary base classes shown in FIG. 22. The Story
Engine may consist of a content assembler (the Story Engine itself)
and the databases containing data structures that specify an
application and the underlying content model that represents and
interfaces to multiple representations of multimedia content
elements.
[0327] StoryElement
[0328] Description:
[0329] StoryElement is the abstract class for all components that
makeup a story, i.e., elements to be assembled dynamically.
[0330] Responsibilities:
[0331] Abstract base class with an associated string ID.
[0332] Private Properties:
[0333] name: String=null
[0334] The name of the StoryElement.
[0335] Public Methods:
[0336] StoryElement (sname: String=null):
[0337] Public Constructor for Story Element.
[0338] getName( ): String
[0339] Return the name of StoryElement.
[0340] read (istream: InputStream=null): void
[0341] Read a StoryElement from an InputStream
(language-specific).
[0342] write (ostream: OutputStream=null): void
[0343] Write a StoryElement to an OutputStream
(language-specific).
[0344] Filter
[0345] Description:
[0346] A Filter is a StoryElement that basically takes a set of
input ContentElements and outputs a subset of the ContentElements
based on the Filter's filtering semantics.
[0347] Each ContentElement that ends up in the Filter's set of
outputs potentially has a set of points of interaction called
anchors. These anchors can be activated as a result of, e.g.,
user interaction, and produce an application event called a user
selection. So, a Filter's anchors are derived from its contained
ContentElements' anchors.
[0348] Responsibilities:
[0349] Filter an input set of ContentElements based on some filtering
semantics (as defined by concrete subclasses).
[0350] Handle StoryEvent dispatched from the StoryEngine.
[0351] Derived from StoryElement
[0352] Private Properties:
[0353] inputs: ContentElement=null
[0354] The set of ContentElements passed into the Filter for
evaluation. The set can be a selective set as bound by the
application or the global set, which is all existing
ContentElements in the current database.
[0355] outputs: ContentElement=null
[0356] The set of ContentElements resulting from evaluating the
Filter.
[0357] maxOutputs: Integer=-1
[0358] Indicates the maximum number of ContentElements stored in
the outputs upon evaluating the Filter.
[0359] anchors: Set of Anchors=null
[0360] The set of anchors associated with the ContentElements
enumerated in the outputs.
[0361] Public Methods:
[0362] Filter (sname: String=null)
[0363] Public constructor for Filter parameterized for the string
name.
[0364] addInput (iElement: ContentElement=null):
[0365] Add a ContentElement to the set of inputs.
[0366] addInputs (iElements: Set of ContentElement=default):
void
[0367] Add a set of ContentElements to the set of inputs.
[0368] getOutput (i:int=0): ContentElement
[0369] Retrieve the ith ContentElement from the set of outputs.
[0370] getOutputs( ): Set of ContentElements
[0371] Retrieve the Filter's complete set of Outputs.
[0372] getElementNames( ): Set of String[ ]
[0373] Get an array of ContentElement names.
[0374] setMaxOutputs (max: Integer=-1): void
[0375] Set the maximum number of ContentElements cached in the
outputs.
[0376] apply( ):boolean
[0377] Apply the Filter's filtering semantics and place results in
the outputs.
[0378] handleEvent (event: StoryEvent=null): boolean
[0379] This operation handles any Application-level event that has
been dispatched by the StoryEngine.
[0380] Private Methods:
[0381] addOutput (ce: ContentElement=null): void
[0382] Internal operation for adding a ContentElement to the set of
the outputs.
[0383] removeOutput (element: ContentElement=null): void
[0384] Internal operation for removing a ContentElement from the
set of outputs.
[0385] CollectionFilter
[0386] Description
[0387] This type of Filter serves as the base class for all
collection-oriented filters. Specialized classes of
CollectionFilter define specific filtering semantics.
[0388] Responsibilities:
[0389] Operate over a set of contained Filters.
[0390] Derived from Filter
[0391] Private Properties:
[0392] subFilters:
[0393] The collection of contained Filters.
[0394] Public Methods:
[0395] CollectionFilter (sname: String=null):
[0396] Public constructor parameterized for string name.
[0397] addFilter (filter: Filter=null): void
[0398] Add a Filter to the collection of Filters, i.e.,
subFilters.
[0399] removeFilter (filter: Filter=null): void
[0400] Remove a Filter from collection of Filters, i.e.,
subFilters.
[0401] getFilter (i: integer=-1): Filter
[0402] Get the ith filter in the subFilter set.
[0403] FeatureFilter
[0404] Description:
[0405] This class is a Filter whose filtering semantics is to
filter the input set of ContentElements based on the supplied
Feature. The resulting set of matches is stored in the outputs.
[0406] The maximum number of matches is set in maxOutputs.
[0407] Responsibilities:
[0408] Filter a set of ContentElements using a Feature.
[0409] Derived from Filter
[0410] Private Properties:
[0411] feature: Feature=null
[0412] Filtering pattern.
[0413] Public Methods:
[0414] FeatureFilter (sname:String=null, feat:Feature=null):
[0415] Public constructor parameterized for string name and a
Feature.
setFeature (f:Feature=null): void
[0416] Set the Filter's feature to use as a filter pattern.
[0417] UserFilter
[0418] Description:
[0419] This Filter interfaces to the current user's user model (as
specified by the StoryEngine). It utilizes the user model's feature
vector to filter content elements from the input
ContentElements.
[0420] Responsibilities:
[0421] Filter an input set of ContentElements using a UserModel.
[0422] Derived from Filter
[0423] Private Properties:
[0424] user: UserModel=null
[0425] References current user model.
[0426] Public Methods:
[0427] UserFilter (sname: String=null, um: UserModel=null):
[0428] Public constructor parameterized for string name and a
UserModel.
[0429] setUserModel (um: UserModel=null):
[0430] Set the UserModel for the Filter.
[0431] RuleFilter
[0432] Description:
[0433] This class is a rule-based filter that applies a predicate
operation, which in turn applies either a THEN or an ELSE filter.
[0434] Responsibilities:
[0435] Apply a predicate operation and branch to one of two
filters.
[0436] Derived from Filter
[0437] Private Properties:
[0438] thenFilter: Filter=null
[0439] If predicate results in true, apply thenFilter.
[0440] elseFilter: Filter=null
[0441] If predicate results in false, apply the elseFilter.
[0442] Public Methods:
[0443] RuleFilter (sname:String=null, tFilter: Filter=null,
eFilter: Filter=null):
[0444] Public constructor parameterized for a string name, a THEN
filter, and a ELSE filter.
[0445] setTHEN (filter: Filter=null):
[0446] Set the THEN filter.
[0447] setELSE (f: Filter=default):
[0448] Set the ELSE filter of the RuleFilter.
[0449] predicate( ): boolean
[0450] This operation executes its code and returns a boolean
result. This operation needs to be redefined by concrete
subclasses.
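A minimal RuleFilter sketch follows. In the patent the predicate is redefined by concrete subclasses; here it is a plain callable, and the filters are simple functions, both assumptions for illustration.

```python
# Sketch of a RuleFilter: evaluate a predicate, then apply either the
# THEN or the ELSE filter to the inputs.
class RuleFilter:
    def __init__(self, predicate, then_filter, else_filter):
        self.predicate = predicate          # returns a boolean
        self.then_filter = then_filter      # applied when predicate is true
        self.else_filter = else_filter      # applied when predicate is false

    def apply(self, inputs):
        branch = self.then_filter if self.predicate() else self.else_filter
        return branch(inputs)

# Two hypothetical branch filters.
def only_ads(elements):
    return [ce for ce in elements if ce["type"] == "ad"]

def everything(elements):
    return list(elements)

# Branch on (say) whether the user has opted into advertising.
user_accepts_ads = False
rule = RuleFilter(lambda: user_accepts_ads, only_ads, everything)
```

Because the predicate runs at assembly time, the same story definition can branch differently per user and per session.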
[0451] Scene
[0452] Description:
[0453] This is a composite Filter composed of other Filters. This
Filter provides the capability to construct a hierarchically
layered set of Filters and their associated ContentElements.
[0455] Evaluating a Scene results in its outputs residing in the
outputs of the contained filters. This is one of the main
differences between a Scene and other CollectionFilters.
[0456] The second main difference is that a Scene is the only
presentable Filter. In order for any Filter to be presented, it
must be embedded in a Scene.
[0457] This Filter supports the abstraction of services and
presentations.
[0458] Responsibilities:
[0459] Aggregates a set of filters.
[0460] Interface for presenting a story.
[0461] Derived from CollectionFilter
[0462] Private Properties:
[0463] presentation: Presentation=null
[0464] The presentation responsible for rendering the Scene.
[0465] Public Methods:
[0466] Scene (sname: String=null):
[0467] Public constructor parameterized for string name.
[0468] setPresentation (p: Presentation=null): void
[0469] Set the dependent Presentation of this Scene.
[0470] AndFilter
[0471] Description:
[0472] The AndFilter is basically a union set operator. It takes 2
or more filters. The combined results of the evaluation of this
Filter are stored in its outputs.
[0473] Responsibilities:
[0474] ANDing the results of 2 or more contained Filters.
[0475] Derived from CollectionFilter
[0476] Public Methods:
[0477] AndFilter (sname: String=null):
[0478] Public constructor parameterized for a string name.
[0479] OrFilter
[0480] Description:
[0481] The OrFilter is basically a union set operator. It takes 2
or more filters. The combined results of the evaluation of this
Filter are stored in its outputs.
[0482] Responsibilities:
[0483] ORing the results of the 2 or more contained Filters.
[0484] Derived from CollectionFilter
[0485] Public Methods:
[0486] OrFilter (sname: String=null)
[0487] Public constructor parameterized for a string name.
[0488] TemporalScene
[0489] Description:
[0490] This Scene has a special capability to sequence its
contained Filters' associated ContentElements, i.e., outputs. By
defining a playout duration, each contained Filter's
ContentElements will be presented one at a time. Optionally, the
temporal playout can be repeated a specified number of times.
[0491] This requires that a Presentation be able to launch a timer
that ultimately returns a TimeoutEvent to this Scene via the
StoryEngine.
[0492] Responsibilities:
[0493] Specifies a temporal playout of the resulting set of
filtered ContentElements.
[0494] Derived from Scene
[0495] Private Properties:
[0496] duration: TimeUnit=null
[0497] The duration of the Scene.
[0498] repeating: boolean
[0499] Indicate if Scene will repeat its temporal playout.
[0500] numReps: integer=-1
[0501] Indicates the number of repetitions of the playout.
[0502] Public Methods:
[0503] TemporalScene (sname: String=null, interval: TimeUnit=null,
repeat: boolean=false, numreps: integer=-1):
[0504] Public constructor parameterized for time interval,
indication if allowing repetitions, and the number of
repetitions.
[0505] setDuration (time: TimeUnit=null):void
[0506] Set the duration of playout for each contained
ContentElement.
[0507] repeating (flag: boolean=null): void
[0508] Indicates if the temporal playout will be repeating.
[0509] setReps (reps: integer=-1): void
[0510] Set the number of repetitions of the temporal playout.
[0511] StoryEvent
[0512] Description:
[0513] This class represents events of interest to StoryElements.
The PresentationEngine is responsible for listening for events. Any
StoryEvents are forwarded to the StoryEngine and ultimately to the
Scene and the appropriate sub-components.
[0514] Responsibilities:
[0515] Represent the event of user or story action (timer) that
occurred on a ContentElement.
[0516] Private Properties:
[0517] timestamp: Timestamp=null
[0518] Indicates the time of event occurrence.
[0519] Public Methods:
[0520] StoryEvent (tstamp: TimeStamp=null):
[0521] Public constructor parameterized for timestamp.
[0522] UserSelectionEvent
[0523] Description:
[0524] This event occurs when a user activates an Anchor.
[0525] Responsibilities:
[0526] Represent a user action of selection.
[0527] Derived from StoryEvent
[0528] Private Properties:
[0529] anchor: Anchor=null
[0530] Indicates selected Anchor.
[0531] Public Methods:
[0532] UserSelectionEvent (a: Anchor=null)
[0533] Public constructor parameterized for a Anchor.
[0534] getAnchor( ): Anchor
[0535] Retrieve Anchor object associated with this event.
[0536] TimeoutEvent
[0537] Description:
[0538] This event occurs when a timer has expired. A timer is
called for by a TemporalScene and is realized by a
Presentation-side Timer object.
[0539] Responsibilities:
[0540] Represent a timeout event for a ContentElement.
[0541] Derived from StoryEvent
[0542] Private Properties:
[0543] scene:TemporalScene=null
[0544] Indicates the original TemporalScene that initiated the
timer request that has now expired.
[0545] Public Methods:
[0546] TimeoutEvent (f:Filter=null):
[0547] Public constructor parameterized for a filter (i.e.,
TemporalScene).
[0548] StoryEngine
[0549] Description:
[0550] This class is the system-level interface to the story
subsystem.
[0551] Responsibilities:
[0552] 1. Execution engine for assembling StoryElements based on a
Story Model.
[0553] 2. Maintains history of played Filters.
[0554] 3. Tracks current Filter.
[0555] 4. Interfaces to the UserModeling system.
[0556] 5. The StoryEngine also interfaces to the global pool of
ContentElements. It supplies these elements by default to the
Filters' inputs.
[0557] Private Properties:
[0558] currentFilter: Filter=null
[0559] Indicates currently executing Filter.
[0560] umMgr: UMMgr=null
[0561] Access to a UMMgr that interfaces to the UserModel pool.
[0562] playhistory: Set of Filter=null
[0563] The set of Filters played out during the session.
[0564] Public Methods:
[0565] StoryEngine (umMgr: UMMgr=null):
[0566] Public Constructor parameterized for a UMMgr.
[0567] assemble (scene: Scene=null): boolean
[0568] Start the composition of the current Scene. This operation
in turn calls the Scene's evaluate operation, which recursively
triggers the evaluation of its contained Filters.
[0569] dispatchEvent (event: StoryEvent=null): Scene
[0570] Dispatch the StoryEvent, originally forwarded by the
PresentationEngine, to the current Scene. This operation calls
Scene's handleEvent operation.
[0571] This operation ultimately returns a new Scene that the
StoryEngine executes to continue the playout of the story.
[0572] init( ): boolean
[0573] Initialize StoryEngine including:
[0574] Load the "story database"
[0575] Retrieve a UserModel from the UserModelManager.
[0576] Anchor
[0577] Description:
[0578] This class represents an anchor that links a source Filter
to a sink Filter. The location of the anchor is set in the
sourceFilter attribute. Its destination is set in the
destinationFilter attribute.
[0579] Most importantly, the destination can be determined at
playout time, i.e., run-time.
[0580] Responsibilities:
[0581] Maintain a link between two Filters
[0582] Private Properties:
[0583] sourceFilter: Filter=null
[0584] This is the source Filter of the Anchor.
[0585] destinationFilter: Filter=null
[0586] This is the sink Filter for the Anchor.
[0587] Public Methods:
[0588] Anchor (anchorName: String=null):
[0589] Public constructor parameterized for string name.
[0590] Anchor (anchorName: String=null, srcFilter: Filter=null,
dstFilter: Filter=null):
[0591] Public constructor parameterized for a string name, a source
Filter, and a destination Filter.
[0592] setSource (source: Filter=null): void
[0593] Set the source Filter of the Anchor.
[0594] setDestination (dest: Filter=null): void
[0595] Set the destination Filter of the Anchor.
[0596] getSource( ): Filter
[0597] Get the source Filter.
[0598] getDestination( ): Filter
[0599] Get destination Filter.
[0600] read (istream: InputStream=null): void
[0601] Read an Anchor from an InputStream (language-specific).
[0602] write (ostream: OutputStream=null): void
[0603] Write an Anchor to an OutputStream (language-specific).
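The Anchor's key property, that its destination can be determined at playout time, can be shown in a few lines. The filter stand-ins are plain strings and the names are hypothetical; only the source/destination structure follows the class description above.

```python
# Sketch of the Anchor construct: a named link from a source Filter
# to a destination Filter, rebindable at run-time (playout time).
class Anchor:
    def __init__(self, name, source=None, destination=None):
        self.name = name
        self.source = source              # the Filter hosting the anchor
        self.destination = destination    # the Filter the anchor leads to

    def set_destination(self, dest):
        # The destination can be determined at playout time, so the same
        # anchor can lead different users to different scenes.
        self.destination = dest

# Hypothetical anchor created before its target is known.
link = Anchor("more-sports", source="home-scene")
link.set_destination("sports-scene")      # bound later, at run-time
```

Activating such an anchor produces a UserSelectionEvent, which the StoryEngine dispatches to determine the next scene to play.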
[0604] Presentation Engine Subsystem Classes
[0605] FIG. 23 shows an exemplary PresentationEngine class diagram
according to the present invention. The following provide
descriptions of exemplary base classes shown in FIG. 23. The
Presentation Engine may consist of a presentation generator, and a
library of presentation components that may be matched up with the
corresponding application elements (i.e., story elements) that will
compute the final presentation form of the content elements.
[0606] Template
[0607] Description:
[0608] This is an abstract class that maps to a StoryElement. This
class needs to be specialized to define appropriate presentation
properties in accordance with the target delivery platform (e.g.,
HTMLTemplate).
[0609] Responsibilities:
[0610] The primary responsibility of Template is to determine which
representation (ContentMediaElement) of the associated
ContentElement to render.
[0611] Private Properties:
[0612] contentElement: ContentElement=null
[0613] The ContentElement to be presented/rendered.
[0614] currentRepresentation: ContentMediaElement=null
[0615] Currently selected representation of the associated
ContentElement.
[0617] name: String=null
[0618] string-based identification of the Template.
[0619] Public Methods:
[0620] Template (contentElement: ContentElement=null, context:
PresentationContext=null):
[0621] Public constructor parameterized for its associated
StoryElement and a PresentationContext.
[0622] initialize( ): void
[0623] Initialization sets the Template ready for generation.
[0624] A Template can be called upon successively to regenerate
itself and select an alternative ContentRepresentation.
[0625] render( ): void
[0626] Format or display the final form of the Template to the
target environment.
[0627] select (context: PresentationContext=null):
ContentRepresentation
[0628] Heuristic-based selection of a ContentRepresentation from
the Template's associated ContentElement's pool of representations.
Selection is based on the original design intent and the
PresentationContext.
[0629] generate( ): boolean
[0630] Top-level operation to generate a candidate form of the
Template. This operation calls upon select( ). Returns true if
successful.
[0631] evaluate( ): boolean
[0632] Given the current PresentationContext, this operation
evaluates the Template in its candidate form to determine if it is
acceptable.
[0633] read (istream: InputStream=null): void
[0634] Read a Template from an InputStream (language-specific).
[0635] write (ostream: OutputStream=null): void
[0636] Write a Template to an OutputStream (language-specific).
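The heart of Template is the select( )/generate( ) pair. A minimal Python sketch follows; the PresentationContext attributes (supported_types, bandwidth_kbps) and the specific selection heuristic are illustrative assumptions, since the specification leaves the heuristic to concrete subclasses such as HTMLTemplate.

```python
class ContentMediaElement:
    """One media representation of a ContentElement."""
    def __init__(self, media_type, size_kb):
        self.media_type = media_type
        self.size_kb = size_kb


class ContentElement:
    """A content element with a pool of alternative representations."""
    def __init__(self, name, representations):
        self.name = name
        self.representations = representations


class PresentationContext:
    """Snapshot of the delivery environment (attributes are hypothetical)."""
    def __init__(self, supported_types, bandwidth_kbps):
        self.supported_types = supported_types
        self.bandwidth_kbps = bandwidth_kbps


class Template:
    """Maps to a ContentElement; decides which representation to render."""
    def __init__(self, content_element, context):
        self.content_element = content_element
        self.context = context
        self.current_representation = None

    def select(self, context):
        # Assumed heuristic: first representation whose type the target
        # environment supports and that fits the available bandwidth.
        for rep in self.content_element.representations:
            if (rep.media_type in context.supported_types
                    and rep.size_kb <= context.bandwidth_kbps):
                return rep
        return None

    def generate(self):
        # Top-level generation: pick a candidate form via select().
        self.current_representation = self.select(self.context)
        return self.current_representation is not None
```

A low-bandwidth, text-and-image context would thus steer generate( ) away from a large video representation toward a small image.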
[0637] CompositeTemplate
[0638] Description:
[0639] This class is an aggregate that maintains and represents a
set of Templates. This class enables hierarchically structured,
recursive presentations. This class typically maps to a Scene in
the StoryElement domain. CompositeTemplate redefines the render,
select, generate, and evaluate operations.
[0640] Responsibilities:
[0641] Represent and manage the final form of the contained
Templates. Calls upon contained Templates to iteratively generate
their final form to satisfy the design intent and constraints of
the CompositeTemplate.
[0642] Working with a LayoutElement, computes candidate layouts of
the contained parts.
[0643] Key Capability:
[0644] Embodied with "smarts" to join or split the contained
original set of Templates depending on the design intent of a
subclass. This capability is a cooperative process with a
LayoutElement, which acts as a spatial layout expert.
[0645] Derived from Template
[0646] Private Properties:
[0647] subTemplates: Set of Template=null
[0648] The set of contained Templates that a CompositeTemplate
manages.
[0649] scene: Scene=null
[0650] Associated Scene object.
[0651] Public Methods:
[0652] CompositeTemplate (scene: Scene=null, name: String=null,
context: PresentationContext=null):
[0653] Public constructor parameterized for a Scene, string name,
and a PresentationContext.
[0654] layout (layoutElement: LayoutElement=null): boolean
[0655] Computes the layout of its contained templates in
cooperation with a LayoutElement. The CompositeTemplate delegates
to a LayoutElement the abstract task of computing a
constraint-based layout, i.e., determines how to glue the content
elements together, while the CompositeTemplate has the specific
task of dictating a specified design style.
[0656] addTemplate (tmpl: Template=null): void
[0657] Add a Template to set of subTemplates.
[0658] deleteTemplate (tmpl: Template=null): void
[0659] Delete a Template from the set of subTemplates.
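CompositeTemplate follows the classic composite pattern. The sketch below, with assumed Python signatures, shows how a composite's generate( ) and render( ) delegate to the contained Templates.

```python
class Template:
    """Leaf template; maps to a single ContentElement."""
    def __init__(self, name):
        self.name = name

    def generate(self):
        return True

    def render(self):
        return self.name


class CompositeTemplate(Template):
    """Aggregates a set of Templates; typically maps to a Scene."""
    def __init__(self, name):
        super().__init__(name)
        self.sub_templates = []

    def add_template(self, tmpl):
        self.sub_templates.append(tmpl)

    def delete_template(self, tmpl):
        self.sub_templates.remove(tmpl)

    def generate(self):
        # A composite succeeds only if every contained Template
        # generates its candidate form successfully.
        return all(t.generate() for t in self.sub_templates)

    def render(self):
        # Render each contained Template in order.
        return [t.render() for t in self.sub_templates]
```

Because CompositeTemplate is itself a Template, composites can nest, giving the hierarchical, recursive presentations the specification describes.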
[0660] Presentation
[0661] Description:
[0662] This class encapsulates the rendering of a presentation of a
Scene.
[0663] Responsibilities:
[0664] The primary responsibility of the Presentation class is to
create the corresponding presentation object hierarchy that maps to
the hierarchical structure of a Scene object.
[0665] Private Properties:
[0666] rootScene: Scene=null
[0667] The top-level scene associated with a Presentation.
[0668] rootTemplate: CompositeTemplate=null
[0669] The top-level CompositeTemplate associated with the
rootScene.
[0670] Public Methods:
[0671] Presentation (scene: Scene=null, context:
PresentationContext=null, id: String=null):
[0672] Public constructor parameterized for a Scene, a name, and a
PresentationContext.
[0673] map (scene: Scene=null): CompositeTemplate
[0674] This operation constructs a tree composed of
Templates that map to each StoryElement contained in a Scene and
its subcomponents.
[0675] This operation returns the root CompositeTemplate that maps
to the root Scene.
[0676] render( ): void
[0677] This operation in turn calls render( ) on its contained
Templates.
[0678] generate( ): boolean
[0679] This operation in turn calls generate( ) on all contained
templates to launch the generation of a Presentation.
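The map( ) operation builds a template tree that mirrors the Scene hierarchy. The sketch below uses plain dictionaries for the template nodes and a minimal Scene stand-in; both are simplifying assumptions for illustration.

```python
class Scene:
    """Minimal stand-in: a Scene contains StoryElements, some of
    which are themselves Scenes (composites)."""
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []


class Presentation:
    """Builds a presentation hierarchy that mirrors a Scene hierarchy."""
    def __init__(self, scene):
        self.root_scene = scene
        # map() returns the root composite node for the root Scene.
        self.root_template = self.map(scene)

    def map(self, element):
        # Recursively mirror the Scene hierarchy: Scenes become
        # composite nodes, other StoryElements become leaf templates.
        if isinstance(element, Scene):
            return {"composite": element.name,
                    "children": [self.map(c) for c in element.children]}
        return {"template": element}
```

In a full implementation the dictionary nodes would be CompositeTemplate and Template instances; the recursion shape is the point here.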
[0680] Timer
[0681] Description:
[0682] This class represents a timer entity for showing a
Presentation for a specific interval of time. A timeout event is
spawned when a Timer expires.
[0683] Responsibilities:
[0684] Represent a timer that spawns a timeout event.
[0685] Private Properties:
[0686] duration: TimeUnit=null
[0687] Duration of timer
[0688] Public Methods:
[0689] Timer (presentation: Presentation=null, interval:
TimeUnit=null):
[0690] Public constructor parameterized for a Presentation and a
length of duration.
[0691] setDuration (interval: TimeUnit=null): void
[0692] Set the duration of the Timer.
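A Timer of this kind can be sketched over Python's standard threading.Timer; the callback-based timeout notification is an assumed rendering of the TimeoutEvent mechanism described elsewhere in this specification.

```python
import threading


class Timer:
    """Shows a Presentation for a fixed interval; fires a timeout on expiry."""
    def __init__(self, interval_seconds, on_timeout):
        self.duration = interval_seconds
        self.on_timeout = on_timeout  # stands in for spawning a TimeoutEvent
        self._timer = None

    def set_duration(self, interval_seconds):
        self.duration = interval_seconds

    def start(self):
        # Schedule the timeout callback after the configured duration.
        self._timer = threading.Timer(self.duration, self.on_timeout)
        self._timer.start()
```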
[0693] LayoutElement
[0694] Description:
[0695] This is an abstract base class that is intended to
coordinate the arrangement of the elements that make up a
CompositeTemplate.
[0696] Concrete subclasses need to define the appropriate
operations and attributes dependent on the specific delivery
environment (e.g., HTML, X Windows, set-top, etc.).
[0697] Responsibilities:
[0698] Spatially arrange a CompositeTemplate's elements.
[0699] Private Properties:
[0700] cTemplate: CompositeTemplate=null
[0701] The CompositeTemplate whose elements are being arranged.
[0702] pContext: PresentationContext=null
[0703] The current PresentationContext.
[0704] Public Methods:
[0705] LayoutElement (compositeTemplate:
CompositeTemplate=null):
[0706] Public constructor parameterized for a CompositeTemplate and
a PresentationContext.
[0707] arrange (cTmpl: CompositeTemplate=null): boolean
[0708] This operation computes the spatial arrangement of a
CompositeTemplate's elements.
[0709] PresentationEngine
[0710] Description:
[0711] The PresentationEngine is the system-level interface to the
presentation system.
[0712] Responsibilities:
[0713] Determine PresentationContext.
[0714] Find the most appropriate matching Presentation for the
incoming Scene.
[0715] Hand off StoryEvents back to the Application.
[0716] Private Properties:
[0717] presentationContext: PresentationContext=null
[0718] The current PresentationContext for the given
Presentation.
[0719] presentation: Presentation=null
[0720] The current Presentation being generated/presented.
[0721] Public Methods:
[0722] PresentationEngine (userModelMgr: UMMgr=null):
[0723] Public constructor parameterized for a UserModelManager.
[0724] init( ): void
[0725] This operation loads the presentation database.
[0726] present( ): StoryEvent
[0727] This operation includes the following steps:
[0728] 1. Invoke Scene lookup operation.
[0729] 2. Generate the currentPresentation.
[0730] 3. Wait for a StoryEvent and return it.
[0731] lookup (scene: Scene=null): Presentation
[0732] This operation attempts to match the scene from the
StoryEngine to a corresponding Presentation that will have the most
appropriate mapping to the Scene components.
[0733] handleEvent( ): StoryEvent
[0734] This operation waits for a StoryEvent to be detected and
subsequently returned to the application.
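The present( ) sequence (lookup, generate, wait for a StoryEvent) can be sketched as follows. The presentation database is modeled as a plain dictionary and the event wait as a supplied callable; both are simplifying assumptions.

```python
class PresentationEngine:
    """System-level interface: match a Scene to a Presentation,
    generate it, and hand the resulting StoryEvent to the application."""
    def __init__(self, presentation_db):
        # Assumed shape: maps scene names to presentation objects.
        self.presentation_db = presentation_db
        self.presentation = None

    def lookup(self, scene_name):
        # Match the incoming Scene to the best corresponding Presentation.
        return self.presentation_db.get(scene_name)

    def present(self, scene_name, event_source):
        # 1. Invoke the Scene lookup operation.
        self.presentation = self.lookup(scene_name)
        if self.presentation is None:
            raise KeyError(scene_name)
        # 2. (Generation of the Presentation would occur here.)
        # 3. Wait for a StoryEvent and return it; event_source stands
        #    in for blocking on user input or a timeout.
        return event_source()
```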
[0735] PresentationContext
[0736] Description:
[0737] This class represents the current snapshot of the delivery
environment at any given moment during the generation of a
Presentation. As the UserModel is to the user's profile, so the
PresentationContext is to a profile of the presentation
environment.
[0738] Responsibilities:
[0739] Maintain a collection of attribute-value pairs that describe
the delivery environment.
[0740] Private Properties:
[0741] featureVector: FeatureVector
[0742] Public Methods:
[0743] PresentationContext (id: String=null):
[0744] Public constructor parameterized for a string name.
[0745] Content Model Classes
[0746] FIG. 24 shows an exemplary content class diagram according
to the present invention. The following provide descriptions of
exemplary base classes shown in FIG. 24.
[0747] ContentElement
[0748] Description:
[0749] ContentElement is the root class of the content model
hierarchy. This class abstracts an element of content and maintains
a set of ContentMediaElements, where each ContentMediaElement is a
different representation of the ContentElement (e.g. text, image,
etc.)
[0750] Responsibilities:
[0751] Multi-modal representation of an element of content.
[0752] Integrate multiple representations of an element of
content.
[0753] Private Properties:
[0754] name: String=null
[0755] the name of the ContentElement.
[0756] keywords: FeatureVector=null
[0757] A FeatureVector used to represent the semantic
meaning.
[0758] representations: Set of ContentMediaElement=null
[0759] different media representations of the ContentElement.
[0760] anchors: Set of Anchor=null
[0761] Public Methods:
[0762] ContentElement (cname: String=null):
[0763] Public constructor parameterized for a string name.
[0764] ContentElement (cname: String=null, keys: String[
]=null):
[0765] Public constructor parameterized for a string name and an
array of keywords.
[0766] setKeywords (keys: String[ ]=null): void
[0767] Set the keywords of this ContentElement.
[0768] getKeywords( ): FeatureVector
[0769] Get the feature vector of the ContentElement.
[0770] addMediaElement (cme: ContentMediaElement=null): void
[0771] Add a ContentMediaElement to the ContentElement's set of
media elements.
[0772] removeMediaElement (cme: ContentMediaElement=null): void
[0773] Remove the given ContentMediaElement from the
ContentElement's set of media elements.
[0774] getName( ): String
[0775] Return the ContentElement's name.
[0776] read (istream: InputStream=null): void
[0777] Read a ContentElement from an InputStream
(language-specific).
[0778] write (ostream: OutputStream=null): void
[0779] Write a ContentElement to an OutputStream
(language-specific).
[0780] getMedia (i: integer=-1): ContentMediaElement
[0781] Get the ith ContentMediaElement contained in the set of
representations.
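ContentElement's role as an integrator of alternative media representations can be shown with a short sketch. The Python signatures and the show( ) return value are assumptions; the structure mirrors the class description above.

```python
class ContentMediaElement:
    """Wrapper around a raw media asset (text, image, audio, video)."""
    def __init__(self, media_type, asset):
        self.media_type = media_type
        self.asset = asset

    def show(self):
        # A real subclass would render the asset; here we describe it.
        return f"[{self.media_type}] {self.asset}"


class ContentElement:
    """One element of content with multiple media representations."""
    def __init__(self, name, keywords=None):
        self.name = name
        self.keywords = keywords or []
        self.representations = []

    def add_media_element(self, cme):
        self.representations.append(cme)

    def remove_media_element(self, cme):
        self.representations.remove(cme)

    def get_media(self, i):
        # Get the ith ContentMediaElement in the set of representations.
        return self.representations[i]

    def get_name(self):
        return self.name
```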
[0782] CompositeContent
[0783] Description:
[0784] This class supports the creation of an aggregation of
ContentElements.
[0785] Responsibilities:
[0786] Maintain a set of ContentElements.
[0787] Derived from ContentElement
[0788] Private Properties:
[0789] components: Set of ContentElement=null
[0790] Public Methods:
[0791] CompositeContent (cname: String=null, keys: String[ ]=null,
cmpnts: Set of ContentElement=null):
[0792] Public constructor parameterized for a string name, a set of
keywords, and a set of subcomponents.
[0793] addComponent (c: ContentElement=null): void
[0794] Add a ContentElement to this CompositeContent's set of
components.
[0795] getComponent (i: int=0): ContentElement
[0796] Get the ith ContentElement contained in the components set.
[0797] removeComponent (cmpnt: ContentElement=null): void
[0798] Remove a component from the components list.
[0799] getComponentNames( ): String[ ]
[0800] Get an array of the component content names.
[0801] ContentMediaElement
[0802] Description:
[0803] This is a virtual class that defines general attributes and
operations for ContentMediaElements. It will be implemented in
Audio, Video, Image and Text subclasses.
[0804] This class basically acts as a wrapper class to media
assets, hiding the details of the raw media.
[0805] Responsibilities:
[0806] Representation of a media asset (e.g., image, video segment,
text segment).
[0807] Private Properties:
[0808] name: String=null
[0809] The name of the ContentMediaElement.
[0810] author: String=null
[0811] The author of the ContentMediaElement.
[0812] anchors: Set of Anchor=null
[0813] The set of associated Anchors.
[0814] Public Methods:
[0815] ContentMediaElement (cmeName: String, cmeAuthor:
String):
[0816] Public constructor parameterized for a string name and
string author's name.
[0817] show( ): void
[0818] Show the ContentMediaElement.
[0819] Metadata Model Classes
[0820] FIG. 25 shows an exemplary metadata class diagram according
to the present invention. The following provide descriptions of
exemplary base classes shown in FIG. 25.
[0821] FeatureType
[0822] Description:
[0823] This is the abstract base class for all FeatureTypes.
FeatureType is a wrapper class that encapsulates one or more
primitive datatypes that collectively provide more meaning.
[0824] For example, a FeatureType called homeLocation may be
comprised of four strings that represent the street address, city,
state, and country of a user, and a FeatureType called geoLocation
may be comprised of two real numbers that represent the latitude
and longitude of a location.
[0825] Responsibilities:
[0826] Abstract base class for types of features.
[0827] Private Properties:
[0828] typeName: String=null
[0829] The string name of this FeatureType.
[0830] Public Methods:
[0831] FeatureType (sname: String=null):
[0832] Public constructor parameterized for a string name.
[0833] equals (object: Object=null): boolean
[0834] Determines if object is of a specific FeatureType.
[0835] validate( ): boolean
[0836] Determines if the FeatureType associated data is valid,
e.g., string within length bounds.
[0837] read (istream: InputStream=null): void
[0838] Read a FeatureType from an InputStream
(language-specific).
[0839] write (ostream: OutputStream=null): void
[0840] Write a FeatureType to an OutputStream (language-specific).
[0841] Feature
[0842] Description:
[0843] This class represents a weighted datum, more specifically, a
weighted FeatureType instance. The assumption is that the "data" is
rated in terms of importance on a real number scale from 0.0 to
1.0.
[0844] Responsibilities:
[0845] Represent a weighted attribute of interest.
[0846] Private Properties:
[0847] weight: float=-1
[0848] The weight of importance/priority on a real number scale
from 0.0 to 1.0.
[0849] data: FeatureType=null
[0850] The FeatureType instance that comprises the Feature.
[0851] Public Methods:
[0852] Feature (data: FeatureType=null, wt: float=-1):
[0853] Public constructor parameterized for a datum and a
weight.
[0854] setWeight (wt: float=-1): void
[0855] Set the weight of the Feature.
[0856] getWeight( ): float
[0857] Get the feature's weight.
[0858] getData( ): Object
[0859] Get encapsulated FeatureType's data.
[0860] read (istream: InputStream=null): void
[0861] Read a Feature from an InputStream (language-specific).
[0862] write (ostream: OutputStream=null): void
[0863] Write a Feature to an OutputStream (language-specific).
[0864] FeatureVector
[0865] Description:
[0866] This class is a set of weighted Features. This class is a
basic data structure for representing metadata.
[0867] Responsibilities:
[0868] Maintains a set of features.
[0869] Private Properties:
[0870] features: Set of Feature=null
[0871] A set of features.
[0872] Public Methods:
[0873] FeatureVector( ):
[0874] Public constructor.
[0875] addEntry (feat: Feature=null): void
[0876] Add a Feature to the FeatureVector's set of Features.
[0877] deleteEntry (feat: Feature=null): void
[0878] Delete a Feature from the FeatureVector's set of Features.
[0879] findEntry (feat: Feature=null): Feature
[0880] Find the Feature in the FeatureVector's set of Features.
[0881] similarity (fv: FeatureVector=null): float
[0882] Compute a numerical score indicating how similar/dissimilar
two FeatureVectors are.
[0883] read (istream: InputStream=null): void
[0884] Read a FeatureVector from an InputStream
(language-specific).
[0885] write (ostream: OutputStream=null): void
[0886] Write a FeatureVector to an OutputStream
(language-specific).
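Feature and FeatureVector can be sketched together. The specification does not define the similarity( ) scoring, so the min-weight overlap score below, normalized by the receiving vector's total weight, is purely an illustrative assumption.

```python
class Feature:
    """A weighted datum; weight is on a real scale from 0.0 to 1.0."""
    def __init__(self, data, weight):
        self.data = data
        self.weight = weight


class FeatureVector:
    """A set of weighted Features; the basic metadata data structure."""
    def __init__(self):
        self.features = {}

    def add_entry(self, feat):
        self.features[feat.data] = feat

    def delete_entry(self, feat):
        self.features.pop(feat.data, None)

    def find_entry(self, data):
        return self.features.get(data)

    def similarity(self, other):
        # Assumed scoring: for each datum present in both vectors,
        # credit the smaller of the two weights, normalized by this
        # vector's total weight. Returns a value in [0.0, 1.0].
        total = sum(f.weight for f in self.features.values())
        if total == 0:
            return 0.0
        shared = sum(min(f.weight, other.features[d].weight)
                     for d, f in self.features.items()
                     if d in other.features)
        return shared / total
```

A user model's FeatureVector scored against each candidate advertisement's FeatureVector in this way yields the prioritized, similarity-sorted content sets used by the UserFilters described later.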
[0887] Functional Design
[0888] Now the corporate web site exemplary embodiment will be used
to discuss the functional aspects of creating, assembling and
presenting an application using the architectural framework
according to the present invention.
[0889] FIG. 26 shows an exemplary content database for the
corporate web site according to the present invention. From this
sparse set of content, a simple portion of a web service will be
designed. The database contains content elements with varying
representation. In some cases, the content element has multiple
representations of the same type but with different media
characteristics, e.g., the Nature Conservation Ad has two image
representations but with differing specifications for aspect ratio.
The more varied the database in terms of types of representations
and the number of versions of the same type of representation, the
more contextual the delivery of content to the user.
[0890] A story model will be described by using the exemplary
web-based service to show how a hierarchical organization of filters
creates modular, highly complex applications that are assembled
dynamically, shaped by the characteristics of the current user and
the available content.
[0891] FIG. 27 shows a block diagram of a high level view of a
portion of the exemplary web-based service. This portion focuses on
the element of the story that presents information on the history
of the Cedar Fever Bowl. The history of the bowl game goes back to
1995 and the intent of an imaginary web development team is to show
a recap of each game (1995, 1996) and associate an advertisement
alongside each game history recap. The development team develops a
story model that further expands the block diagram shown in FIG.
27. Each game review is developed as a separate component and,
therefore, each element is a self-contained independent aggregate
of information. The architectural framework according to the
present invention supports the development of bottom up services
and reusable story elements. The 1995 game review will consist of a
summary of the 1995 game and an advertisement that best matches the
current user's user model. The 1996 game review will consist of a
summary of the 1996 game and an advertisement that best matches the
current user's user model but that has not been previously reviewed
by the user in the current session; otherwise, an advertisement
that is simply sports related is used.
[0892] FIG. 28 shows the resulting story model. By using
FeatureFilters, one can select specific content elements by
referring to their content_id. UserFilters filter content elements
that match a user's user model, returning a prioritized set of
content elements sorted by their level of similarity. Now that a
story model has been developed, the next step is to develop or
reuse presentation components that will be designated to present
the simple story.
[0893] In order to illustrate the presentation aspects of the web
service example, a set of presentation templates that encapsulate
the HTML delivery environment are developed. In the presentation
domain, there are two types of presentation templates, a Template
and a CompositeTemplate. A CompositeTemplate represents a set of
Templates. Non-composite Templates are mapped to ContentElements.
Composite Templates are mapped to Scenes (a composite
ContentElement). In the creation of a presentation, Templates call
upon their associated ContentElements and retrieve the best
representation of the element in the context of the current
delivery environment. A CompositeTemplate ensures that given its
real estate its subcomponents are intelligently laid out with the
best-suited media representation of a ContentElement
(ContentMediaElement). To further illustrate, an example set of
presentation components are shown in FIG. 29. These example
components are not fully specified, but they illustrate what is
expected of a presentation component.
[0894] In this example presentation model, HTMLDoc, HTMLPage, and
HTMLBody are CompositeTemplates, while HTMLBlock and HTMLAd are
non-composite Templates. The semantics of the components are
loosely the following: (1) an HTMLBodyWithAd will always require one
HTMLBlock and one HTMLAd; (2) an HTMLPage can contain one or more
components of type HTMLBody; and (3) by default, an HTMLDoc contains
one HTMLPage. Additionally, an HTMLDoc maps to one Scene object.
More importantly, if an HTMLDoc determines that one HTMLPage is
insufficient to present a Scene, it may, for example, dynamically
allocate two HTMLPages that map to one StoryElement. This last
point demonstrates the power and flexibility of the architectural
framework according to the present invention, if designed and
implemented correctly.
[0895] Generally the execution process from a high level view for
assembling a story, generating a presentation that shows the story,
and handling any user events includes:
[0896] Creating and initializing a UserModelManager;
[0897] Creating and initializing a StoryEngine;
[0898] Creating and initializing a PresentationEngine;
[0899] Selecting a story element (i.e., Scene) to be the initial
element of the story;
[0900] Calling upon the StoryEngine to assemble a "story" given the
initial element;
[0901] Upon the StoryEngine completing its assembly task, calling
upon the PresentationEngine to present the "story";
[0902] Dispatching a story-relevant event to the StoryEngine to
determine the next story element (Scene) to play; and
[0903] Based on the outcome of the event, set the next story
element (Scene) to be assembled and subsequently presented.
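The steps above amount to an assemble/present/dispatch loop, which can be sketched as follows. The engine interfaces (assemble, present, next_scene) are hypothetical simplifications of the StoryEngine and PresentationEngine APIs described earlier.

```python
def run_story(story_engine, presentation_engine, initial_scene):
    """Top-level execution loop: assemble each Scene, present it,
    then decode the resulting StoryEvent into the next Scene.
    Returns the trace of assembled scenes (for illustration)."""
    scene = initial_scene
    trace = []
    while scene is not None:
        # Call upon the StoryEngine to assemble the story element.
        assembled = story_engine.assemble(scene)
        trace.append(assembled)
        # Call upon the PresentationEngine; it returns a StoryEvent.
        event = presentation_engine.present(assembled)
        # Dispatch the event to determine the next Scene to play.
        scene = story_engine.next_scene(event)
    return trace
```

The loop terminates when no next Scene results from the dispatched event.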
[0904] The following illustrates the assembling of a scene for the
exemplary web service example, the 1996Game, contained in the
BowlHistory Scene. Initially, it is assumed that the Story Model at
this point has been constructed and mapped with the appropriate
ContentElements. The StoryEngine starts off the assembly of the
1996Game Scene. Each successive Scene calls upon its contained
Filters, resulting in a depth-first traversal of the filter
hierarchy. Each Scene supplies its inputs to its contained Filters.
This is the default execution behavior, which may be overridden by
the application designers implementing their own base
CompositeTemplate class that redefines the execution semantics.
[0905] Specifically, the 1996GameReview, a FeatureFilter, calls
for a content element with the feature
"content_id=1996GameReview", which is an explicit call for a
specific ContentElement. Next, the element named
PersonalizedAd.sub.--2, an AndFilter, retrieves advertisements
that have the feature "content_type=Ad" and that favorably match
the user's UserModel. Having met these constraints, the last
Filter, a RuleFilter, checks to see if the ContentElements that
have resulted from the two previous Filters (within the AndFilter)
are on the StoryEngine's already-played list. If all have been
"played", one content element that has the feature
"keyword=sports" is selected from the current set; otherwise, any
one ContentElement from the current set is chosen.
[0906] The AndFilter feeds the set of ContentElements resulting
from each contained Filter to the next, which differs from the
Scene Filter that simply supplies the same inputs to each contained
Filter. Additionally, when the UserFilter is executed, it retrieves
the user's UserModel via the UMMgr (user model manager) to carry
out its execution.
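The difference between AndFilter chaining and Scene fan-out can be shown with filters modeled as plain functions over a content set (an assumed simplification of the Filter classes).

```python
def and_filter(filters, content_set):
    """AndFilter semantics: feed the result of each contained filter
    to the next, narrowing the content set at each stage."""
    result = content_set
    for f in filters:
        result = f(result)
    return result


def scene_filter(filters, content_set):
    """Scene semantics: supply the same inputs to each contained
    filter, producing one independent result set per filter."""
    return [f(content_set) for f in filters]
```

With a "type is Ad" filter chained into a "keyword is sports" filter, and_filter yields only sports advertisements, whereas scene_filter yields both result sets independently.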
[0907] To generate a final presentation of a Scene, it is assumed
that the process of mapping the story elements to the presentation
elements has already occurred and its outcome is shown partially in
FIG. 30. The process of rendering (actually displaying the
presentation) is not shown or described.
[0908] In generating a presentation, we have a hierarchical
structure that maps to the hierarchical structure in the
StoryModel. Once again, the structure is traversed in a depth-first
manner. Each non-composite template (leaf element in the hierarchy)
selects one ContentMediaElement object that represents the
template's associated ContentElement. Once a CompositeTemplate's
sub-components have satisfactorily selected their
ContentMediaElements, the CompositeTemplate calls upon a
LayoutElement to arrange the layout of these sub-components. Once a
layout is generated, the CompositeTemplate evaluates the candidate
presentation based on its criteria as defined in a concrete class.
If satisfied, control is passed back to its containing template,
and the whole process starts all over again for the siblings in the
hierarchy. Eventually, control returns to the top-level, and if all
else evaluated satisfactorily, the overall presentation is ready to
be rendered.
[0909] In general, the rendering process simply involves traversing
the presentation hierarchy and invoking the show( ) operation on
the finally selected ContentMediaElement, and displaying the
hierarchy as specified by its CompositeTemplate and its
LayoutElement.
[0910] If any step in the previously described process of
evaluation fails, i.e., a CompositeTemplate is not satisfied with
the selection of the media representations and/or the layout of its
subcomponents, the CompositeTemplate then pushes back on the
contained templates to choose alternate media representations of
its associated ContentElement. This process involves an iterative
generation of the final form of the presentation. The final
presentation form is shown in FIG. 31.
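The iterative generate/evaluate/push-back cycle described above is, in effect, a search over combinations of media representations. A compact sketch follows; the per-leaf alternative lists and the acceptance test are assumed inputs standing in for the Templates' representation pools and the CompositeTemplate's evaluate( ) criteria.

```python
from itertools import product


def generate_presentation(alternatives_per_leaf, evaluate_layout):
    """Try combinations of media representations (one per leaf
    Template) until the composite's evaluate step accepts a layout.
    Each rejection corresponds to the CompositeTemplate pushing back
    on its contained Templates for alternative representations."""
    for combo in product(*alternatives_per_leaf):
        if evaluate_layout(combo):
            return combo
    return None  # no acceptable presentation form exists
```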
[0911] To dispatch a Story-specific event, it is assumed that the
presentation of a story (i.e., Scene) has been successfully
rendered by this point. As previously described, ContentElements
have associated anchors that surface on a ContentElement because of
their relationship with ContentMediaElements. When an event occurs
in the PresentationEngine that has relevance to the StoryEngine
(e.g., a UserSelectionEvent, a TimeoutEvent), the event is
forwarded to the current Scene. The Scene object decodes the event
to extract the next Scene to be assembled and presented, and the
whole process starts over.
[0912] It is noted that the foregoing examples have been provided
merely for the purpose of explanation and are in no way to be
construed as limiting of the present invention. While the present
invention has been described with reference to a preferred
embodiment, it is understood that the words which have been used
herein are words of description and illustration, rather than words
of limitation. Changes may be made within the purview of the
appended claims, as presently stated and as amended, without
departing from the scope and spirit of the present invention in its
aspects. Although the present invention has been described herein
with reference to particular means, materials, and embodiments, the
present invention is not intended to be limited to the particulars
disclosed herein, rather, the present invention extends to all
functionally equivalent structures, methods and uses, such as are
within the scope of the appended claims.
* * * * *