U.S. patent application number 10/973698 was filed with the patent office on 2004-10-26 (published 2006-04-27) for system and method for acquisition and storage of presentations. This patent application is currently assigned to Fuji Xerox Co., Ltd. Invention is credited to John E. Adcock, Laurent Denoue, David M. Hilbert, and Jonathan J. Trevor.
Application Number: 10/973698
Publication Number: 20060090123
Family ID: 36207385
Published: 2006-04-27

United States Patent Application 20060090123
Kind Code: A1
Denoue, Laurent; et al.
April 27, 2006
System and method for acquisition and storage of presentations
Abstract
Embodiments of the present invention enable the extraction,
classification, storage, and supplementation of presentation video.
A media system receives a video signal carrying presentation video.
The media system processes the video signal and generates images
for slides of the presentation. The media system then extracts text
from the images and uses the text and other characteristics to
classify the images and store them in a database. Additionally, the
system enables viewers of the presentation to provide feedback on
the presentation, which can be used to supplement the
presentation.
Inventors: Denoue, Laurent (Palo Alto, CA); Trevor, Jonathan J. (Santa Clara, CA); Hilbert, David M. (Palo Alto, CA); Adcock, John E. (Menlo Park, CA)
Correspondence Address: FLIESLER MEYER, LLP, FOUR EMBARCADERO CENTER, SUITE 400, SAN FRANCISCO, CA 94111, US
Assignee: Fuji Xerox Co., Ltd. (Tokyo, JP)
Family ID: 36207385
Appl. No.: 10/973698
Filed: October 26, 2004
Current U.S. Class: 715/202; 715/230; 715/255
Current CPC Class: G06F 40/169 20200101; G06F 16/50 20190101; G06F 16/44 20190101; G06F 16/40 20190101; G06F 16/447 20190101; G06F 16/71 20190101; Y10S 707/914 20130101; G06F 16/70 20190101; G06F 16/93 20190101; G06F 16/24573 20190101; G06F 16/4393 20190101
Class at Publication: 715/500.1
International Class: G06F 17/24 20060101 G06F 17/24
Claims
1. A method for capturing video presentations, the method
comprising: collecting a signal comprising video information, the
signal associated with a presentation; generating at least one
image from the signal; determining one or more categorization
criteria for the image; and storing the image in association with
the categorization criteria.
2. The method of claim 1, wherein the image comprises a slide in
the presentation.
3. The method of claim 1, wherein the categorization criteria
comprise text within the image.
4. The method of claim 1, wherein the categorization criteria
comprise visual characteristics of the image.
5. The method of claim 1, wherein the categorization criteria
comprise a time in which the presentation was shown.
6. The method of claim 1, wherein the categorization criteria
comprise a meeting in which the presentation was shown.
7. The method of claim 1, further comprising: receiving a search
query; and returning presentation content according to a similarity
between the search query and categorization criteria for the
presentation content.
8. The method of claim 1, wherein the image is stored in JPEG
format.
9. The method of claim 1, further comprising transmitting the image
to a display.
10. The method of claim 1, wherein the image is stored in a
Structured Query Language database.
11. The method of claim 1, further comprising compressing the
image.
12. The method of claim 1, further comprising modifying the
image.
13. The method of claim 1, further comprising: transmitting the
image to a viewer of the presentation; and accepting an annotation
for the image.
14. The method of claim 13, further comprising storing the
annotation in association with the image.
15. The method of claim 1, wherein the categorization information
comprises metadata.
16. The method of claim 1, further comprising: capturing an audio
signal; and storing the audio signal in association with
categorization criteria of an image captured at approximately the
time in which the audio signal was captured.
17. The method of claim 1, further comprising: capturing an audio
signal; extracting features of the audio signal; and storing the
audio signal in association with the extracted features.
18. The method of claim 1, wherein generating an image from the
signal comprises determining a content type for the signal.
19. The method of claim 1, further comprising: accepting an overlay
for the image; and storing the overlay in association with the
image.
20. A machine readable medium having instructions stored thereon
that when executed by a processor cause a system to: collect a
signal comprising video information, the signal associated with a
presentation; generate at least one image from the signal;
determine one or more categorization criteria for the image; and
store the image in association with the categorization
criteria.
21. The machine readable medium of claim 20, wherein the image
comprises a slide in the presentation.
22. The machine readable medium of claim 20, wherein the
categorization criteria comprise text within the image.
23. The machine readable medium of claim 20, wherein the
categorization criteria comprise visual characteristics of the
image.
24. The machine readable medium of claim 20, wherein the
categorization criteria comprise a time in which the presentation
was shown.
25. The machine readable medium of claim 20, further comprising
instructions that when executed by a processor cause the system to:
receive a search query; and return presentation content according
to a similarity between the search query and categorization
criteria for the presentation content.
26. The machine readable medium of claim 20, wherein the image is
stored in JPEG format.
27. The machine readable medium of claim 20, wherein the image is
stored in a Structured Query Language database.
28. The machine readable medium of claim 20, further comprising
instructions that when executed by a processor cause the system to:
transmit the image to a viewer of the presentation; and accept an
annotation for the image.
29. The machine readable medium of claim 28, further comprising
instructions that when executed by a processor cause the system to
store the annotation in association with the image.
30. The machine readable medium of claim 20, wherein the
categorization information comprises metadata.
31. The machine readable medium of claim 20, further comprising
instructions that when executed by the processor cause the system
to: capture an audio signal; and store the audio signal in
association with categorization criteria of an image captured at
approximately the time in which the audio signal was captured.
32. The machine readable medium of claim 20, further comprising
instructions that when executed by the processor cause the system
to: accept an overlay for the image; and store the overlay in
association with the image.
33. The machine readable medium of claim 20, wherein the
instructions for generating an image from the signal comprise
instructions for determining a content type for the signal.
34. The machine readable medium of claim 20, wherein the
categorization criteria comprise a meeting in which a presentation
was shown.
35. The machine readable medium of claim 20, further comprising
instructions that when executed by the processor cause the system
to transmit the image to a display.
36. The machine readable medium of claim 20, further comprising
instructions that when executed by the processor cause the system
to modify the image.
37. The machine readable medium of claim 20, further comprising
instructions that when executed by the processor cause the system
to: capture an audio signal; extract features of the audio signal;
and store the audio signal in association with the extracted
features.
38. A system for storing video presentations, the system
comprising: a database for storing images; an image capture module
configured to convert a data signal associated with a presentation
into at least one image; an update module configured to: determine
one or more categorization criteria for the image; and store the
image in the database in association with the categorization
criteria.
39. The system of claim 38, wherein the image comprises a slide in
the presentation.
40. The system of claim 38, wherein the categorization criteria
comprise text within the image.
41. The system of claim 38, wherein the categorization criteria
comprise visual characteristics of the image.
42. The system of claim 38, wherein the categorization criteria
comprise a time in which the presentation was shown.
43. The system of claim 38, wherein the update module is further
configured to: receive a search query; and return presentation
content according to a similarity between the search query and
categorization criteria for the presentation content.
44. The system of claim 38, wherein the image is stored in JPEG
format.
45. The system of claim 38, wherein the image is stored in a
Structured Query Language database.
46. The system of claim 38, wherein the update module is further
configured to: transmit the image to a viewer of the presentation;
and accept an annotation for the image.
47. The system of claim 46, wherein the update module is further
configured to store the annotation in association with the
image.
48. The system of claim 38, wherein the categorization information
comprises metadata.
49. The system of claim 38, wherein the image capture module is
further configured to: capture an audio signal; and store the audio
signal in association with categorization criteria of an image
captured at approximately the time in which the audio signal was
captured.
50. The system of claim 38, wherein the update module is further
configured to: accept an overlay for the image; and store the
overlay in association with the image.
51. The system of claim 38, wherein the image capture module, when
converting the signal, determines a content type for the
signal.
52. The system of claim 38, wherein the categorization criteria
comprise a meeting in which a presentation was shown.
53. The system of claim 38, wherein the image capture module is
further configured to transmit the image to a display.
54. The system of claim 38, wherein the update module is further
configured to modify the image.
55. The system of claim 38, wherein the image capture module is
further configured to: capture an audio signal; extract features of
the audio signal; and store the audio signal in association with
the extracted features.
56. A method for capturing video presentations, the method
comprising: collecting a signal comprising video information, the
signal associated with a presentation; determining a media type
from the signal; responsive to determining the media type,
generating an image from the signal; extracting one or more items
of metadata from the image; and storing the image in association
with the metadata.
57. The method of claim 56, wherein extracting the one or more
items of metadata comprises extracting text from the image.
58. The method of claim 56, wherein determining the media type
comprises determining that the media type comprises a slide.
59. The method of claim 56, wherein determining the media type
comprises determining that the media type comprises a video
stream.
60. A method for capturing video presentations, the method
comprising: collecting a signal comprising video information, the
signal associated with a presentation; determining whether a
presentation element in the signal is static or dynamic; generating
an image from the presentation element when the presentation
element is static; generating a video clip from the presentation
element when the presentation element is dynamic; and determining
one or more characteristics associated with the presentation
element.
61. The method of claim 60, further comprising storing the video
clip in association with the characteristics when the presentation
element is dynamic.
62. The method of claim 60, further comprising storing the image in
association with the characteristics when the presentation element
is static.
63. The method of claim 60, wherein the image corresponds to a
slide in the presentation.
64. The method of claim 60, wherein the video clip corresponds to
video shown within the presentation.
65. The method of claim 60, wherein the video clip corresponds to
interactions with a software application shown within the
presentation.
66. The method of claim 60, wherein the characteristics comprise
text within the image or video clip.
67. The method of claim 60, wherein the characteristics comprise
visual characteristics of the image or video clip.
68. The method of claim 60, wherein the characteristics comprise a
time in which the image or video clip was shown.
69. The method of claim 60, wherein the characteristics are
determined after the image or video clip is stored.
70. The method of claim 60, further comprising: receiving a search
query for presentation content; and returning the presentation
content according to a similarity between the search query and
characteristics of the presentation element.
71. The method of claim 60, further comprising: generating a
classification for the presentation element according to whether
the presentation element is static or dynamic.
72. The method of claim 71, further comprising: receiving a search
query for presentation content; and returning the presentation
content according to a similarity between the search query and the
classification of the presentation content.
73. The method of claim 60, wherein the image is stored in JPEG
format.
74. The method of claim 60, wherein the video clip is stored in
MPEG format.
76. The method of claim 60, wherein the image and video clip are
stored in a Structured Query Language database.
76. The method of claim 60, further comprising transmitting the
signal for display.
77. The method of claim 60, further comprising: transmitting the
image or video clip for display.
78. The method of claim 60, further comprising: accepting
supplemental information for the presentation element; and
transmitting a signal including the presentation element and the
supplemental information.
79. The method of claim 78, further comprising storing the
supplemental information in association with the image or video
clip.
80. The method of claim 60, further comprising: capturing an audio
signal associated with the presentation element; and storing the
audio signal.
81. The method of claim 80, further comprising: receiving a search
query for a presentation element; returning an image or video clip
according to a similarity between the search query and
characteristics of the image or video clip; and returning an audio
signal associated with the image or video clip.
82. The method of claim 60, wherein the characteristics comprise
text within the image or video clip.
83. The method of claim 60, wherein the characteristics comprise
visual characteristics of the image or video clip.
84. The method of claim 60, wherein the characteristics comprise a
time in which the presentation was shown.
85. The method of claim 60, further comprising: reducing a size of
the presentation element; and transmitting the presentation
element.
86. The method of claim 60, further comprising: accepting
supplemental information for the presentation element; retrieving a
search request for which the presentation element is returned; and
returning the supplemental information with the presentation
element.
87. The method of claim 60, further comprising: accepting
supplemental information for the presentation element; receiving a
search query; and returning the supplemental information in
response to a similarity between the search query and the
supplemental information.
88. The method of claim 60, wherein collecting a signal comprising
video information comprises: determining that the signal contains
presentation elements; and in response to a determination that the
signal contains presentation elements, initiating the step of
determining whether a presentation element is static or
dynamic.
89. The method of claim 60, further comprising: determining that
the signal no longer contains presentation elements; and in
response to a determination that the signal no longer contains
presentation elements, halting the step of determining whether a
presentation element is static or dynamic.
Description
BACKGROUND OF THE INVENTION
[0001] 1. Field of the Invention
[0002] The present invention relates generally to processing and
storing images. More particularly it relates to extracting
information from video presentations and storing the video
presentations for later use.
[0003] 2. Description of the Related Art
[0004] In modern business environments, a greater emphasis has been
placed on the transfer and exchange of information. Over this time,
slide-based presentations created with computer presentation
software such as Microsoft PowerPoint, web-based presentations, and
video presentations have become a staple of modern business
environments. However, such presentation software, while often
superficially useful for presenting information to others, possesses
a number of severe limitations.
[0005] Firstly, the media (e.g., slides, video, audio) used in the
presentation are seldom stored in a format that is easily
searchable or accessible. Thus, it is often difficult for
presenters and recipients of these presentations to search the
content. This limitation is especially troublesome as these
presentations may be the only broadly accessible documents through
which certain types of gathered information are available.
Additionally, such presentation software is usually unable to
solicit input from the viewers of the presentation, limiting the
presentation to a passive experience.
[0006] Attempts to address these problems have usually centered
around additions or modifications to the presentation software.
However, such modifications must be performed on a per-application
basis, and in the case of soliciting input, usually require
configuration on the systems of the viewers.
[0007] What is needed is an improved system for storing,
organizing, and modifying presentations.
SUMMARY OF THE INVENTION
[0008] Embodiments of the present invention enable the extraction,
classification, storage, and supplementation of presentation video.
A media system receives a signal carrying presentation video. The
media system processes the signal and generates images for slides
of the presentation. The media system then extracts text from the
images and uses the text and other characteristics to classify the
images and store them in a database. Additionally, the system
enables viewers of the presentation to provide feedback on the
presentation, which can be used to supplement the presentation.
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] Preferred embodiments of the present invention will be
described in detail based on the following figures, wherein:
[0010] FIG. 1 is a block diagram illustrating one embodiment of
interaction among a computer system, a media system, and a display
device;
[0011] FIG. 2 is a block diagram illustrating an alternate
embodiment of interaction among a computer system, a media system,
and a display device;
[0012] FIG. 3 is a block diagram illustrating a closer view of a
media system in accordance with one embodiment of the present
invention;
[0013] FIG. 4 is a block diagram illustrating one embodiment of
categorization information for stored video;
[0014] FIG. 5 is a flow chart illustrating a process for handling
presentation video input from a computer system;
[0015] FIG. 6 is a flow chart illustrating a process for
categorizing and storing video input;
[0016] FIG. 7 is a flow chart illustrating a process for
supplementing presentations with user input;
[0017] FIG. 8 is a flow chart illustrating a process for utilizing
stored content in new presentations.
DETAILED DESCRIPTION OF THE INVENTION
[0018] Embodiments of the present invention enable the extraction,
classification, storage, and supplementation of presentation video.
A media system receives a signal carrying presentation video. The
media system processes the signal and generates images for slides
of the presentation. The media system then extracts text from the
images and uses the text and other characteristics to classify the
images and store them in a database.
[0019] The present system automates the process of detecting,
capturing, interpreting, and storing presentations. The system can
detect when a presentation is beginning and initiate a process that
detects whether content is static or dynamic and stores and
classifies it accordingly. The system can also modify the content
for ease of organization and distribution, distribute the content
to viewers in an original or modified format, and end operations
when a presentation is no longer detected. The steps above can be
performed without any direct user commands to start and stop
operations or any user sorting/separation/organization of the
media.
[0020] FIG. 1 is a block diagram illustrating one embodiment of
interaction among a computer system, a media system, and a display
device. A computer system 105, such as a laptop computer, desktop
computer, tablet system, or any other type of computer, is
connected to a video splitter 110. The computer system 105
transmits an output video signal to the splitter 110, which splits
the video signal and outputs it to the media system 115 and the
display device 120. The video signal can be digital or analog and
can comprise any number of signal formats. The video signal can
also be a data signal containing video information, such as a
Virtual Network Computing (VNC) signal. The splitter can also
perform conversion of a data signal to a video signal.
[0021] The display device 120 is a device used to display the video
output to viewers of the presentation. The display device can be a
Liquid Crystal Display (LCD) projector, an analog projector, a
Cathode Ray Tube (CRT) display, an LCD display, or any other type of
display.
[0022] The media system 115 receives the video output from the
splitter 110, uses it to generate audio and video media for the
presentation, and extracts relevant information from the media. In
some embodiments, the media system 115 is a conventional computer
running specialized software; in alternate embodiments, the media
system 115 is a computer specially configured to function as a
media system. In some embodiments, the media system is also
configured to collect audio through a microphone or other input.
The audio can be stored in association with the presentation images
and video.
[0023] FIG. 2 is a block diagram illustrating an alternate
embodiment of interaction among a computer system, a media system,
and a display device. In the present embodiment, the media system
115 sits between the computer system and the display device 120. In
this embodiment, the media system processes the video signal,
generates slide images and displays the generated slide images on
the display device 120. The media system 115 may also accept image
overlays and supplements, or other modifications, and output them
to the display device 120. Alternately, the overlays can be
generated by an automatic agent such as a translator program that
automatically translates the text of the presentation. The media
system 115 can also include a "pass-through" mode where the input
video signal is passed directly, without modification, to the
output device 120.
[0024] FIG. 3 is a block diagram illustrating a closer view of a
media system 115 in accordance with one embodiment of the present
invention. The media system 115 includes a video capture module
305, an image sampling module 310, an image converter 315, an
update module 320, a text extraction module 325, a database 330,
and an input/output module 335. These components may be implemented
through any combination of hardware, software, and firmware.
[0025] The video capture module 305 receives the video signal from
the splitter 110 or computer system 105. The image sampling module
310 generates slide images from the video captured by the video
capture module. In one embodiment, the image sampling module
detects if a particular image has been broadcast steadily for a
predetermined amount of time and treats it as a single slide.
Alternately, continuous video is recorded in full. If the sampling
module 310 determines that the image is a slide, it generates a
bitmap for the image. If it determines that the media is video, a
video recording of either the whole capture or a segment of the
window that contains the video is captured.
[0026] The image converter 315 may optionally convert the bitmap to
a more size-efficient format, such as JPEG or another format. An
update module 320 is configured to generate categorization
information for media and to store the media, with the
categorization information, in the database 330. In some
embodiments, the update module 320 first utilizes the text
extraction module 325, which detects text in the image and provides
the text to the update module.
[0027] The categorization information can include date/time
information for the presentation, an identifier for the particular
presentation being shown, characteristics of the image,
supplemental information received from either the presenter or the
viewers, and text within the image. Some categorization information
is generated after the presentation has been recorded while some
categorization information is generated in real time.
[0028] The input/output module 335 is used to generate an interface
for configuring the media system 115. The interface can be a
console interface on the media system 115 itself, a graphical user
interface that is accessed through input/output devices such as a
keyboard and monitor that are connected to the media system, or a
web interface that is accessed over a network. The input/output
module 335 can also be used to transmit overlays and video
supplements to the media system 115, which uses the overlays to
modify the image. In one embodiment, the input/output module
comprises a web server running on the media system 115. By viewing
an interface page on the web server, viewers of the presentation
can submit questions and comments as overlays for the presentation.
The web server can also be used as an interface for submitting
search queries for images stored in the database 330.
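By way of illustration only, the question and comment submission described above might be realized with a small HTTP endpoint exposed by the input/output module; the following Python sketch uses only the standard library, and the endpoint path, field names, and in-memory store are assumptions rather than features of the disclosed input/output module 335.

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs

    ANNOTATIONS = []  # illustrative in-memory store; the disclosed system would persist to the database 330

    class MediaSystemHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Viewers post form-encoded questions or comments for the current slide.
            length = int(self.headers.get('Content-Length', 0))
            fields = parse_qs(self.rfile.read(length).decode('utf-8'))
            if self.path == '/annotate':
                ANNOTATIONS.append({
                    'slide_id': fields.get('slide_id', [''])[0],
                    'text': fields.get('text', [''])[0],
                })
                self.send_response(200)
            else:
                self.send_response(404)
            self.end_headers()

    if __name__ == '__main__':
        HTTPServer(('0.0.0.0', 8080), MediaSystemHandler).serve_forever()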
[0029] FIG. 4 is a block diagram illustrating one embodiment of
categorization information 400 for stored media. The categorization
information 400 includes metadata 402 and classification
information 418. The metadata 402 is preferably stored in
association with the media and is generated when the media is first
captured. The classification information 418 can be stored in
association with the media or centrally. The classification
information 418 is often generated after a presentation rather than
in real-time. The metadata includes content information 405. The
content information indicates whether the stored video comprises a
single slide image, a video clip containing continuous video,
audio, or some other type of media.
[0030] The metadata additionally includes text information 410. The
text information 410 includes text that has been extracted from the
slide image by the text extraction module 325. The information can
include all of the text or particular key words that were
designated as representative words for searches. The text
information 410 can include weights or other information indicating
the importance of particular text in the slides. For example, the
text extraction module 325 can be programmed to recognize title
text or section headings and give that text greater importance in
classifying the slide image.
[0031] The metadata additionally includes video characteristics
415. The video characteristics include image characteristics that
are extracted from the slide image. These can include colors or
distinctive shapes or other image qualities. The metadata
additionally includes supplemented information 425. The
supplemented information includes overlays and other information
that is provided by a presenter, automatic agent, or the audience
during a presentation.
[0032] The classification information 418 can include an identifier
for the presentation from which the image is extracted. It may also
include time and date information for the presentation. For
example, all of the video or slides for a single presentation would
include the same identifier within the classification information
418. Presentation data can also be grouped by meeting or day with
all of the presentation data for a single meeting or day classified
associatively. Artificial categorizations that associate
presentations that are related in other ways can also be added.
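For illustration, the categorization information 400 could be modeled with a data structure along the following lines; this Python sketch is an assumption that simply mirrors the metadata 402 and classification information 418 described above, and the field names are not taken from the disclosure.

    from dataclasses import dataclass, field
    from datetime import datetime
    from typing import Dict, List, Optional

    @dataclass
    class Metadata:                      # metadata 402, generated when the media is first captured
        content_type: str                # content information 405: 'slide', 'video', 'audio', ...
        text: Dict[str, float] = field(default_factory=dict)            # text information 410: term -> weight
        video_characteristics: List[str] = field(default_factory=list)  # colors, shapes, etc. (415)
        supplements: List[str] = field(default_factory=list)            # overlays and annotations (425)

    @dataclass
    class Classification:                # classification information 418, often generated after the fact
        presentation_id: str
        meeting_id: Optional[str] = None
        shown_at: Optional[datetime] = None

    @dataclass
    class CategorizationInfo:            # categorization information 400
        metadata: Metadata
        classification: Classification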
[0033] The categorization information 400 can be used by an
associated search utility to retrieve presentation content in
response to submitted search requests. Users of the search utility
can search according to content or organizational data (i.e. when a
presentation was shown, content shown at a meeting or presentation)
and the search utility will return media, complete presentations,
or sections of presentations matching the search request.
[0034] FIG. 5 is a flow chart illustrating a process for handling
presentation video input from a computer system. In step 505, the
media system 115 accepts presentation video and/or audio,
preferably through the video capture module 305. In some
embodiments, the system can detect when a presentation has begun by
analyzing an incoming video stream and detecting characteristics
indicative of a presentation. This process can also be used to stop
recording when the detected video characteristics indicate that a
presentation is no longer being transmitted. In step 510, the media
system extracts the presentation information. This step includes
the determination of what type of media is being presented, the
extraction of slide images or video streams from the video, the
conversion of the slide images to JPEGs, and the extraction of text
from the image. This step is described in greater detail with
respect to FIG. 6. This step may also include the extraction of
video streams and audio streams. This step can also include
analysis of the audio content for changes in volume, detection of
words through speech-to-text extraction, and any other useful or
relevant characteristics of the audio. Audio content can be
classified according to characteristics of the audio,
characteristics of video detected at the same time, or both.
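As one hedged example of the volume analysis mentioned above, short-term energy could be computed over fixed windows and large jumps flagged; the NumPy sketch below assumes mono samples and an arbitrary threshold, neither of which is specified in the disclosure.

    import numpy as np

    def volume_changes(samples, rate=16000, window_s=0.5, jump=2.0):
        """Return indices of windows whose RMS energy jumps sharply from the previous window."""
        window = int(rate * window_s)
        n = len(samples) // window
        rms = np.array([
            np.sqrt(np.mean(samples[i * window:(i + 1) * window].astype(np.float64) ** 2))
            for i in range(n)
        ])
        return [i for i in range(1, n) if rms[i - 1] > 0 and rms[i] / rms[i - 1] > jump]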
[0035] The system can use a variety of methods for determining the
type of input received from the video signal and categorizing it
accordingly. Usually, the system will analyze a predetermined number
of consecutive frames and categorize them appropriately. In one
embodiment, the system detects a slide or other stable presentation
element by detecting unchanging video frames for more than a
predetermined amount of time.
[0036] Video can be detected in a similar manner. In one
embodiment, the system computes the difference between a series of
consecutive frames. The system checks for a region in the series of
frames in which the frames are always changing (the difference
between successive frames is not null). If it finds a region that
changes continually, it determines that a video clip is playing. In
some embodiments, the system can crop the sections of the frames
that are not changing. In alternate embodiments, the entire frame
is retained.
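A simplified sketch of this static/dynamic determination follows; it assumes frames arrive as equally sized NumPy arrays, and the frame count and pixel threshold are illustrative values rather than figures taken from the disclosure.

    import numpy as np

    STABLE_FRAMES = 90      # e.g., about three seconds at 30 fps before an image is treated as a slide
    PIXEL_THRESHOLD = 8     # per-pixel difference below which two frames count as unchanged

    def classify_frames(frames):
        """Label a run of captured frames as a 'static' slide or a 'dynamic' video clip."""
        if len(frames) < 2:
            return 'static'
        diffs = [np.abs(frames[i].astype(np.int16) - frames[i - 1].astype(np.int16))
                 for i in range(1, len(frames))]
        changed = [(d > PIXEL_THRESHOLD).any() for d in diffs]
        if not any(changed) and len(frames) >= STABLE_FRAMES:
            return 'static'     # unchanging for the predetermined time: treat as a single slide
        if all(changed):
            # Some region differs between every pair of successive frames: a video clip is playing.
            # The bounding box of the changing pixels could be used to crop the frames to that region.
            return 'dynamic'
        return 'mixed'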
[0037] In step 515, the media, which can include video, slides, or
audio, is stored in association with the presentation information
of FIG. 4. In step 520 the presentation information is supplemented
with overlays. These overlays can be received from the presenter,
an automatic agent, or the audience through the web server
generated by the input/output module. In one embodiment, the
presenter can accept questions from audience members through a
network interface. The questions can be overlaid on the slide
image. In step 520, the supplemented image is output to the display
device 120.
[0038] FIG. 6 is a flow chart illustrating a process for
categorizing and storing video input. In step 605 the system
accepts media input. In one embodiment, the media input is received
through the video capture module 305. In step 610, the image
sampling module 310 extracts content from the video stream. In step
612, the image sampling module 310 determines a type for the
content. For example, video clips can be identified if a section of
the image changes continuously and stored as continuous segments.
In one embodiment, the image sampling module 310 checks for images
that are displayed continuously for a predetermined amount of time,
designates those images as static images, and generates bitmaps for
the images. The system can apply other criteria as well. In one
embodiment, the font size of any text in the image is used, with
larger text indicating a greater likelihood that the image is a
slide. During this step the image sampling module can also extract
audio from the media stream, to be stored in association with video
or images captured concurrently.
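One possible way to apply the font-size criterion is to inspect the heights of OCR-detected words; the sketch below uses the pytesseract wrapper purely as an assumed OCR engine (the disclosure does not name one) and treats a large median word height as evidence that the captured image is a slide.

    from statistics import median

    from PIL import Image
    import pytesseract

    def looks_like_slide(image_path, min_height_px=24):
        """Heuristic: slides tend to carry large text, so a large median word height
        raises the likelihood that a captured image is a slide."""
        data = pytesseract.image_to_data(Image.open(image_path),
                                         output_type=pytesseract.Output.DICT)
        heights = [h for h, word in zip(data['height'], data['text']) if word.strip()]
        return bool(heights) and median(heights) >= min_height_px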
[0039] In step 615, the image converter 315 converts the content to
a more compact format, such as GIF or JPEG for images, or MPEG for
video. This step is optional, and in some embodiments, the image is
stored in an uncompressed form. In step 620, the update module 320
generates a new entry in the database 330. The entry is created
with initial categorization information such as the content type
405 for the media and video characteristics 415.
[0040] In step 625, the update module 320 utilizes the text
extraction module 325 to extract text from the image or video. The
text can include weights or other information indicating the
importance of particular text in the slides. For example, the text
extraction module 325 can be programmed to recognize title text or
section headings and give that text greater importance in
classifying the content. In step 630, the content is stored in the
database 330. This step also entails adding the extracted text and
any other supplemental information.
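For concreteness, storing an item of content together with its extracted text and initial categorization information might look like the following sqlite3 sketch; the table layout is an assumption, since the disclosure only indicates that a Structured Query Language database may be used.

    import sqlite3

    def store_media(db_path, presentation_id, content_type, extracted_text, payload):
        """Insert one slide image or video clip, with its categorization text, into the database."""
        con = sqlite3.connect(db_path)
        con.execute("""CREATE TABLE IF NOT EXISTS media (
                           id INTEGER PRIMARY KEY,
                           presentation_id TEXT,
                           content_type TEXT,   -- 'slide', 'video', 'audio'
                           text TEXT,           -- extracted (possibly weighted) text
                           payload BLOB)""")
        con.execute("INSERT INTO media (presentation_id, content_type, text, payload) "
                    "VALUES (?, ?, ?, ?)",
                    (presentation_id, content_type, extracted_text, sqlite3.Binary(payload)))
        con.commit()
        con.close()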
[0041] FIG. 7 is a flow chart illustrating a process for
supplementing presentations with user input. In step 705, the video
signal is received by the video capture module 305. In step 710,
the presentation video is transmitted to the viewers of the
presentation. In some embodiments, the media system 115 transmits
special presentation display information over a network connection,
which is received by the viewers at their terminals or computers,
and is processed and displayed by an application on the recipients'
computers. In step 720, the media system, through the input/output
module 335, accepts annotations from either the viewers or the
presenter. The annotations can be comments or supplemental overlays
(drawings added to the slides through a mouse or writing tool).
Alternately, the annotations can be questions or comments
transmitted from the viewers. In some embodiments, the questions or
comments are displayed in a preset section of the image.
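As a hedged illustration of such an overlay, a viewer question could be drawn into a reserved band beneath the slide image using the Pillow library; the band height and the use of the default bitmap font are assumptions made for brevity.

    from PIL import Image, ImageDraw

    def annotate_slide(slide_path, question, out_path, band_height=60):
        """Compose the slide image with a preset band that carries a viewer question."""
        slide = Image.open(slide_path).convert('RGB')
        canvas = Image.new('RGB', (slide.width, slide.height + band_height), 'white')
        canvas.paste(slide, (0, 0))
        draw = ImageDraw.Draw(canvas)
        draw.text((10, slide.height + 10), 'Q: ' + question, fill='black')
        canvas.save(out_path, format='JPEG')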
[0042] In step 725 the annotated presentation is displayed. In some
embodiments, the annotations are displayed in real time. In
alternate embodiments, the annotations are collected during the
presentation and displayed when the presenter returns to an earlier
stage of the presentation.
[0043] In step 730, the slide image is stored in the database 330
with the annotations stored in the supplemented information
425.
[0044] FIG. 8 is a flow chart illustrating a process for utilizing
stored content in new presentations. In step 805, a user starts
creation of a new slide presentation. In some embodiments, this
presentation is generated on the computer system 105. An
application module on the computer system, either as part of the
presentation generation program, or independently, is configured to
detect the creation of a new presentation. The application module
is configured to access the database 330 on the media server 115.
In step 810, the application module, according to the text input in
the presentation, determines search terms for the presentation. In
step 815, using the search terms, the application searches the
database 330 for related content, cross-referencing the search
terms with the categorization information described in FIG. 4. In
step 820, the system provides images matching the search terms and
prompts a user to include them.
[0045] In step 825, responsive to user acceptance, the matching
images are included in the presentation.
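A possible realization of this lookup, continuing the sqlite3 sketch given earlier, derives candidate search terms from the new presentation's text and counts how many of them each stored item shares; the stop-word list and LIKE-based matching are illustrative simplifications, not the disclosed similarity measure.

    import re
    import sqlite3

    STOP_WORDS = {'the', 'and', 'for', 'with', 'that', 'this', 'from'}

    def related_media(db_path, presentation_text, limit=5):
        """Return ids of stored media whose extracted text shares terms with the new presentation."""
        terms = {w for w in re.findall(r'[a-z]+', presentation_text.lower())
                 if len(w) > 3 and w not in STOP_WORDS}
        con = sqlite3.connect(db_path)
        hits = {}
        for term in terms:
            for (row_id,) in con.execute("SELECT id FROM media WHERE text LIKE ?",
                                         ('%' + term + '%',)):
                hits[row_id] = hits.get(row_id, 0) + 1   # crude similarity: shared-term count
        con.close()
        return sorted(hits, key=hits.get, reverse=True)[:limit]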
[0046] Other features, aspects and objects of the invention can be
obtained from a review of the figures and the claims. It is to be
understood that other embodiments of the invention can be developed
and fall within the spirit and scope of the invention and
claims.
[0047] The foregoing description of preferred embodiments of the
present invention has been provided for the purposes of
illustration and description. It is not intended to be exhaustive
or to limit the invention to the precise forms disclosed.
Obviously, many modifications and variations will be apparent to
the practitioner skilled in the art. The embodiments were chosen
and described in order to best explain the principles of the
invention and its practical application, thereby enabling others
skilled in the art to understand the invention for various
embodiments and with various modifications that are suited to the
particular use contemplated. It is intended that the scope of the
invention be defined by the following claims and their
equivalents.
[0048] In addition to an embodiment consisting of specifically
designed integrated circuits or other electronics, the present
invention may be conveniently implemented using a conventional
general purpose or a specialized digital computer or microprocessor
programmed according to the teachings of the present disclosure, as
will be apparent to those skilled in the computer art.
[0049] Appropriate software coding can readily be prepared by
skilled programmers based on the teachings of the present
disclosure, as will be apparent to those skilled in the software
art. The invention may also be implemented by the preparation of
application specific integrated circuits or by interconnecting an
appropriate network of conventional component circuits, as will be
readily apparent to those skilled in the art.
[0050] The present invention includes a computer program product
which is a storage medium (media) having instructions stored
thereon/in which can be used to program a computer to perform any
of the processes of the present invention. The storage medium can
include, but is not limited to, any type of disk including floppy
disks, optical discs, DVD, CD-ROMs, microdrive, and magneto-optical
disks, ROMs, RAMs, EPROMs, EEPROMs, DRAMs, VRAMs, flash memory
devices, magnetic or optical cards, nanosystems (including
molecular memory ICs), or any type of media or device suitable for
storing instructions and/or data.
[0051] Stored on any one of the computer readable medium (media),
the present invention includes software for controlling both the
hardware of the general purpose/specialized computer or
microprocessor, and for enabling the computer or microprocessor to
interact with a human user or other mechanism utilizing the results
of the present invention. Such software may include, but is not
limited to, device drivers, operating systems, and user
applications.
[0052] Included in the programming (software) of the
general/specialized computer or microprocessor are software modules
for implementing the teachings of the present invention.
* * * * *