U.S. patent application number 12/716231 was filed with the patent office on 2010-09-02 for software-based method for assisted video creation.
Invention is credited to John Nicholas Dukellis, Kaz Hashimoto.
Application Number: 20100223128 (12/716231)
Family ID: 42666893
Filed Date: 2010-09-02

United States Patent Application 20100223128
Kind Code: A1
Dukellis; John Nicholas; et al.
September 2, 2010
Software-based Method for Assisted Video Creation
Abstract
The invention allows for a first set of persons to create and
modify templates that contain visual effects, synchronization
information, and possibly director assistance. These templates are
utilized by a second set of persons to generate personalized motion
photo videos from photographs, video segments, personal narratives
or animation. Motion photo videos are novel because they very
quickly and inexpensively allow (1) persons to create and modify
templates which are synchronized to music and may be easily
populated with content by others, and (2) persons to select a song
and associated pre-made template and create a high-quality
synchronized custom video using hand-selected visual material to
populate the template with included direction and without any
required editing. Populated material in a template can be further
modified by additional persons to instantly generate new, modified
motion photo videos.
Inventors: Dukellis; John Nicholas (San Francisco, CA); Hashimoto; Kaz (Jackson Hole, WY)
Correspondence Address: MoodSync, Inc., 403 Main Street #415, San Francisco, CA 94105, US
Family ID: 42666893
Appl. No.: 12/716231
Filed: March 2, 2010
Related U.S. Patent Documents
Application Number: 61156871 (Filing Date: Mar 2, 2009)
Current U.S. Class: 705/14.51; 705/26.1; 705/310; 709/217; 715/731; 715/810
Current CPC Class: G11B 27/28 20130101; G06Q 30/0253 20130101; G06Q 30/0601 20130101; H04N 5/23293 20130101; H04N 5/232933 20180801; G11B 27/34 20130101; G11B 27/034 20130101; H04N 5/23245 20130101; G06Q 50/184 20130101; H04N 5/262 20130101
Class at Publication: 705/14.51; 705/27; 705/310; 709/217; 715/731; 715/810
International Class: G06Q 30/00 20060101 G06Q030/00; G06Q 99/00 20060101 G06Q099/00; G06F 15/16 20060101 G06F015/16; G06F 3/048 20060101 G06F003/048; G06Q 50/00 20060101 G06Q050/00
Claims
1. The use of a template providing visual direction on a display to
guide a user in the selection of an existing image or sequence of
images to be placed in the template.
2. The method of 1 where the template is displayed using software
residing on a processor-enabled digital device such as a personal
computer, web server, phone, or other computing device that
provides a visual representation and where the template may include
sample images and music synchronized with a given template.
3. The methods of 1 or 2 where multiple templates reside on the
device and provide the user with a choice of one or more templates
and where the templates may be related to a particular theme, song,
mood or event.
4. The methods of 1, 2 or 3 where the template includes
instructions specifying inter-image transitions such as
image-to-image fades or intra-image transitions such as pan and
zoom effects and where the transitions may be synchronized to
music.
5. The methods of 1, 2, 3 or 4 where the templates are downloaded
through a wired or wireless connection from a database of templates
that may reside on a server or are downloaded via the internet to a
client type computer.
6. The methods of 1, 2, 3, 4 or 5 where the user is assisted in the
selection of an image or sequence of images relative to subject
positioning with regard to backgrounds, landscapes or other
subjects.
7. The methods of 1, 2, 3, 4, 5 or 6 where the user is assisted by
visual indicators shown on the display of the device, including the
use of shapes to dictate the placement of a subject, such as
position of the subject's eyes relative to a particular location and
where the location may include a sample image with the location
indicated.
8. The methods of 1, 2, 3, 4, 5, 6, or 7 where the template is
synchronized with music and where the music is played for the user
as part of importing images or image sequences into the template
and where the combination of music and the template aids in the
selection of an image, selection of a transition effect or some
additional modification to the image, such as a transition from
gray scale to color or transitions between multiple images taken in
a sequence.
9. The methods of 1, 2, 3, 4, 5, 6, 7, or 8 used to arrange or organize
a selection of previously captured images in a palette and where
the images are selected according to criteria associated with the
mood, song or theme of a given template and where keywords or
labels associated with a set of images are used to select and
arrange them for importing into a given template.
10. The methods of 1, 2, 3, 4, 5, 6, 7, or 8 where the template
suggests the use of multiple images taken at different settings
where the settings may include depth of focus, shutter speed, ISO
level, flash intensity or synchronization or focal point or where
additional software recognizes or organizes images based on depth
of focus, shutter speed, ISO level, flash intensity or
synchronization or focal point.
11. The use of methods 1, 2, 3, 4, 5, 6, 7, 8, 9, or 10 where
software is used to graphically display a template with a sample
image and a media player is provided that synchronizes the images
transitions to music and where additional images may be provided to
show the beginning and end of the intra-image transitions such as a
pan or zoom effect and where the media player may be used to view
the inter-image transitions from sample image to sample image, such
as fade effect, style or rate.
12. The use of methods 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 or 11 to
generate a motion photo video consisting of a sequence of images
synchronized to music, stored in the form of a software object,
where the object may be a video or the set of imported images with a
computer instruction set to specify the template style or
transition effects.
13. The transfer of a motion photo video in method 12 or template
object used to generate a motion photo video in method 12 from one
device to another digital device such as a computer, phone,
television, frame, digital book, printer, disk drive or digital
media such as a DVD, CD, flash or USB based memory device through
wired or wireless communication.
14. The embedding of the motion photo video or populated template
object resulting from the method in 12 in a personal or commercial
webpage, either through the direction of the user or automated
through software residing on a personal computer, web server, or
other computing device.
15. The sending of the motion photo video or populated template
object resulting from the method in 12 to another person via email
where the object is sent or a link is sent where the object can be
downloaded.
16. The use of methods 1, 2, 3, 4, 5, 6, 7, or 8 that includes the
purchase of a song or some form of a digital media license for
music, where the music is associated with or linked to a given
template or where the sending of an object or link of an object to
another person results in the purchase of a song or some form of a
digital media license for music.
17. The transfer of the motion photo video object resulting from
method 12 through wired or wireless communications from one device
to another.
18. The viewing of the motion photo video object resulting from
method 12 on a display such as a television, digital frame, digital
book, phone or personal computer display.
19. A strategy for selling music that includes the provision of a
visual template synchronized to a particular song that allows
images to be imported by a user, where the synchronization is in
the form of software that specifies inter-image and intra-image
transition effects.
20. A strategy for selling digital display devices for displaying
the motion photo video object resulting from method 12 and where
the display may include electronics that communicate with other
electronic devices to receive the motion photo video object through
a wired or wireless connection.
21. A strategy for advertising products or services by displaying
or broadcasting the motion photo video resulting from method
12.
22. The use of method 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, or 12 to
create an advertisement that may be co-sold or co-branded with a
particular song or that may be related to a specific template.
23. The method of 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, or 11 where one
person creates the template specifying a sequence of visual effects
and a second person utilizes the template to select images to which
the visual effects are applied in order to generate a motion photo
video resulting from method 12.
24. The method of 23 where a template may be created by one person,
where the template may be modified by one or more additional
persons, and where the images are selected by a third person, where
the third person uses the modified template to select content to be
placed into the template to create a motion photo video resulting
from method 12.
25. The method of 23 or 24 where one person selects images to be
placed into a template to generate a motion video resulting from
method 12, and where a second person may replace one or more images
in the resulting template object, and where a new motion photo
video is generated from new content in template object by the
process described in method 12.
26. The use of methods 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, or 11 where
an update is applied to a template which causes the initial motion
photo video created by earlier versions of the populated template
object to be updated with the changes to the template object, which
may include changes to timing, inter-transition bins or their
effects, or other visual effects, where the same content used by
the person to generate the initial motion photo video in method 13,
along with the modified visual effects from the update, is used to
generate the new motion photo video.
27. The use of method 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, or 11 where
the template is generated with the aid of a visual representation
of the music, such as a frequency transformation, energy or volume
profiles at a particular frequency, lyrics in text form, amplitude
or volume of a vocal track, which assist the user in determining
bin properties, inter-bin transitions or intra-bin transitions with
respect to the music.
28. The use of a template software object that contains one or more
bin software objects, each of which has one or more child
objects associated with it such as an image, inter-bin transitions,
intra-bin transitions, speech annotations, background or text,
where the bins are part of a template object which schedules the
display of images with any visual effects associated with the bin
at the time prescribed by the template software object or bin
object, where the template object is used to create a motion photo
video, and where images in bins may be determined by or replaced by
users, manually or automatically, to alter how a template object is
populated and generate new motion photo videos.
29. The use of method 28 where the properties of the bin software
object, such as intra-bin transition effects or timing of the bin,
may be modified by a person so that a third person may populate,
manually or automatically, the modified template object with images
or additional audio to generate a new motion photo video.
30. An apparatus, such as an accelerometer-based device, that aids
the user in selecting visual effects such as intra-bin transitions
or inter-bin transitions associated with the music as part of
generating the template objects used in methods 23, 24, 25, 26, 27,
28, or 29.
31. The licensing or selling of the template object created in
methods 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 26, 27, or 28 by one
person to another person.
Description
[0001] This nonprovisional utility patent application claims the
benefit of the priority date of provisional application No.
61/156,871 filed Mar. 2, 2009.
1.0 BACKGROUND AND SUMMARY
[0002] The invention allows for a first set of persons to create
and modify templates that contain visual effects, synchronization
information, and possibly director assistance. These templates are
utilized by a second set of persons to generate personalized motion
photo videos from photographs, video segments, personal narratives
or animation. Motion photo videos are novel because they very
quickly and inexpensively allow (1) persons to create and modify
templates which are synchronized to music and may be easily
populated with content by others, and (2) persons to select a song
and associated pre-made template and create a high-quality
synchronized custom video using hand-selected visual material to
populate the template with included direction and without any
required editing. Populated material in a template can be further
modified by additional persons to instantly generate new, modified
motion photo videos.
[0003] Professionally produced videos require a script of what
images to shoot or create. Camera crews then must acquire the
footage and the resulting images are processed and modified.
Artists then edit content, including when to start and end segments
and how to transition between segments in order to tightly
synchronize the visual aspects with the specific audio track.
Though results are good, the process is expensive, time-consuming,
and requires significant technical expertise to use video-editing
software. The invention described herein provides a method to
significantly reduce the time, cost, and complexity of creating a
sophisticated template containing a rich story and allowing further
modification of the template for instant creation of videos with
customized footage by unsophisticated users.
[0004] By use of the invention, users with commonly available
electronics such as PCs, cameras, and camera-phones will be able to
instantly or in real-time create videos that capture events, such
as birthdays, vacations, and sports seasons, or moods, such as
happiness of being with a friend or the feeling of missing
somebody. Users can select, and possibly purchase, a popular song
and have a ready-made template into which they can overlay their
images for an instant finished product, creating their own
customized multi-media video. Videos can also be produced without
music, though we describe the rest of the invention hereafter
utilizing music for clarity of the explanation.
[0005] Users are guided through the video creation process by sets
of instructions and image sequences that are pre-defined by
template composers. Any person can be a template composer. These
instructions are invaluable to ordinary users as they allow a
coherent story to be personalized with their own visuals without
the tedium of defining start, stop, and transition images or going
through the effort of listing out the order or flow of images to
create the story. There are no existing integrated tools that
assist users in identifying which images to capture or insert and
that allow instant placement of these images in a human-created
pre-defined template with the goal of immediate production of a
finished video, customized and further modifiable by the user.
Existing computer systems require users to modify nearly all
aspects of a video or they provide a basic template lacking
synchronization defined by humans. The resulting task for the end
user is either too complicated or so limiting that the user cannot
modify critical aspects. Several automated tools allow for
selection of images by a user that are automatically placed into a
pre-defined template or a template that is not pre-defined. This
invention specifically requires users to select images for
placement into bins that are pre-specified. A bin can be considered
an object in software that corresponds to a fixed slot of time and
contains objects including an image, transition effects, text
and/or annotated speech. These bins, and associated data about when
and what occurs within the bin, allow for very tight
synchronization with the audio source, as well as valuable
instruction to the user who is determining which imagery to supply
into the bin.
[0006] There are multiple components in the creation of a motion
photo video (MPV): (1)
template creation, (2) use of template to assist in the composition
of photos in real-time with the purpose of instant MPV creation
using existing camera technology or modified camera hardware or
software technology for the capture of images to facilitate better
MPVs, and (3) use of a template to compose an MPV with pre-existing
images with the purpose of creating an MPV. This invention focuses
on template creation (1) and use of a template to compose an MPV
with pre-existing images with the purpose of creating an MPV
(3).
1.1 BRIEF DESCRIPTION OF THE DRAWINGS
[0007] FIG. D1 shows the template creation process by a template
composer and subsequent personalization by an end user.
[0008] FIG. D2 shows an illustration of the time mesh scheduling of
template events.
[0009] FIG. D3 shows a taxonomy of template objects.
[0010] FIG. D4 shows a timing mesh that synchronizes music, images,
and transitions.
[0011] FIG. D5 shows an inter-bin transition GUI-based editor.
[0012] FIG. D6 shows an intra-bin transition GUI-based editor.
[0013] FIG. D7 shows an intra-transition viewing and editing
application with multiple overlapping bins.
[0014] FIG. D8 shows a separated track view mode for bins.
[0015] FIG. D9 shows an image selection tool.
[0016] FIG. D10 shows a method to create a focal point or
trajectory for the focal point during manipulation of an image
during a transition effect.
[0017] FIG. D11 shows software utilizing pre-existing templates and
images to generate MPVs.
[0018] FIG. D13 shows a flow for template enabled image selection
and composition.
[0019] FIG. D16 shows an image screen selector.
[0020] FIG. D19 shows a camera-based title creation and share
screen.
2.0 DETAILED DESCRIPTION
[0021] This invention describes the creation of a template and the
subsequent creation of an MPV using a template to assist in the
composition and selection of images. This invention focuses on the
use of templates with pre-existing images or where images are
provided to software utilizing a template to create an MPV. Section
2.1 describes the template creation process and the template object
that results. Section 2.2 describes the use of the pre-made
template by a user to create an MPV.
2.1 TEMPLATE CREATION
[0022] Template composers use tools that offer features to define
image starts, stops, and transitions, as well as editing features
relating to focus, movement, and other operations to be performed
on images. Template composers further edit instruction sets for end
users by providing instructions about which images should be placed
by users in a particular bin, offering wording and/or sample
symbolic images. Users can then modify these templates in limited
ways by substituting their own images, either acquiring images in
real-time based on instructions, or by replacing sample images with
already existing images selected by the user. This section begins
with a summary of the template creation process in FIG. D1 and is
then followed by detailed descriptions of several key components in
the process.
[0023] FIG. D1 summarizes a typical process undertaken by a
template composer in the creation of a template. The template
composer first chooses a song and then, while listening to the
song, cuts the timeline of the song into bins. Each bin is defined
by where a new image will begin to appear and where an image will
disappear. These bins are tied directly to the timeline of the song
and can be edited graphically, with or without an image occupying
the bin, for precision as defined below in an object called a time
mesh. An image may be defined as a single digital image or
photograph, a video object, or an animated video object. The most
common use will likely be a single digital photograph per bin and
that representation will be used in the following examples and
description. However, the invention envisions other forms of image
related media being used and versions of the software will enable
additional forms of images to be used.
[0024] The time mesh maintains a schedule of template object
events. The template objects include bin objects in the form of
images or transition effects such as fades, pans and zooms. The
template objects also include audio tracks such as music in digital
form where the events may include the song beginning and end. An
embodiment of a time mesh could be a table where one column
contains the event identification tag and another column that
represents the time at which the event is executed.
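The table embodiment just described can be sketched in software. The following is a minimal illustration only; the class, event names, and times are hypothetical examples, not taken from the application:

```python
# Minimal sketch of a time mesh: a table pairing an event
# identification tag with the time (in seconds) at which the event
# is executed. Event names and times are hypothetical examples.

class TimeMesh:
    def __init__(self):
        self.events = []  # rows of (event_id, execution_time)

    def add_event(self, event_id, execution_time):
        self.events.append((event_id, execution_time))

    def schedule(self):
        # Return all template events ordered by execution time.
        return sorted(self.events, key=lambda row: row[1])

mesh = TimeMesh()
mesh.add_event("SONG_BEGIN", 0.0)   # audio track event
mesh.add_event("B1S", 0.5)          # bin 1 start (image appears)
mesh.add_event("B1E", 4.2)          # bin 1 end (image disappears)
mesh.add_event("FADE_IN_B2", 3.8)   # transition overlapping bin 1

print(mesh.schedule())
```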
[0025] Bins are not automatically determined by computer, but may
utilize computational tools to assist in the creation of bin
boundaries, such as bass patterns, drum beats, voice tracks, or
volume levels as described in FIG. D2 (107). These
characterizations of an audio signal would provide visual waveforms
in organized rows to the template composer, allowing visual
understanding of patterns in the music. Bins are more likely to
follow the phrasing of a song rather than actual beats in order to
make the transitions less routine and contain more feeling. A key
element here is that a template composer ultimately determines
where the bin boundaries occur, and can easily modify them in the
time mesh defined in FIG. D2. This first step after selection of a
song is referred to as "Cutting" and is defined in FIG. D1.
[0026] A template composer may begin the video at a different start
time than the song, for example to avoid the crowd cheering at the
beginning or to avoid a few seconds of blank space that exist on one
digital copy but not on another, such that the song selected would
have an accompanying fingerprint that identifies when the song
starts; this fingerprint could be compared to other copies of the
song to identify precisely where to begin the song, as well as
ensuring the songs are played at the same speed and can arrive at a
common endpoint even if one plays longer than another.
[0027] After completing cutting, the template composer performs
inter-bin transitions, also referred to as blending, between bins
to define how one bin fades in as another fades out. Blending can
be any type of entry of one or more images and/or exit of one or
more images. The template composer then creates intra-bin
transitions that describe whether a photo zooms in or out, or
changes color or undergoes another visual augmentation such as
sharpen or fade to grey. The intra-bin transition data is
fine-tuned to the specific bin, such as placing a fast-moving zoom or a
fade from color to black and white to match the feeling of the song
at that point. The intra-bin transition data is later fine-tuned to
the particular image that is placed into a bin so that it captures
the full artistic intent of the template composer, such as
targeting a zoom on the eyes of a person in an image. Lyrics are
then synchronized to the bins if not already included with the
song, and the template composer inserts images manually or
automatically based on lyrics or image search instructions inserted
by the template composer. These images symbolize which types or
examples of images an end user could insert to follow the storyline
of the video, as appropriate for the specific bin. The template
composer can insert additional information such as prompts, cues,
or instructions about what images to insert into the bin. Some
images inserted by the template composer can have cut-out sections
where they serve as backgrounds for parts of images provided by an
end user, such as a snowy background with a circle overlay where a
person's head appears. The background template image can be
inverted so that there are parts provided by the template composer,
such as text or other visuals, that overlay over an end user's
images. These steps can be completed in any order after the cut
occurs, or they can occur simultaneously through use of tools that
can capture movement of an individual and be decoded to cause the
actions of cut, blend, and intra-bin transitions to occur. The
template can then be packaged up and made available to a user who
can swap images and perform very limited fine tuning to personalize
the video to their liking.
[0028] 002 in FIG. D1 describes the actions of a user who has
received the template on their PC or other electronic device. They
are instructed along the series of images in the template about
which images to capture or insert at a given bin. These images
might originate in real-time from a camera, be supplied by the
template composer along with the template, come from a user's
existing image library, or originate from the Internet based on
keyword information supplied by the template composer or lyrical
data. Sliding these images into the template produces a ready-made
video. After the user selects the images for each bin, the user can
optionally add basic annotations to their liking or as described by
the template composer. Finally, the user can upload the video to
the Internet where it can be accessed later from other devices. It
can be shared in this method by selecting friends with which to
share the video or an Internet site on which it may be posted. The
video in its entirety consists of the song, images (photos, video,
or other imagery), and transition information, all synchronized to
the time mesh.
[0029] FIG. D2 describes an embodiment of the time mesh and how it
may be used to schedule the execution of template object events.
The template objects may be organized or visualized along a time
line in terms of the object type, such as image tracks 101, transition
tracks 103, time mesh 105, and audio tracks, 107. The template
parent object has three main child objects: bins 150, 153, time mesh
105, and music/audio objects 132, 137. The time mesh object 105 may
have two child objects, the event identification 128 and event
execution time 130 objects. These two objects contain the schedule
of events for all template events and may be stored in the form of
a table. Another embodiment may use a time mesh with a single event
id that references execution time and instructions. The time mesh
may have multiple events with the same execution time; in practice
the template and MPV software will order the instructions for the
microprocessor. Two events executed within milliseconds of each
other will likely give the appearance of synchronization to
the viewer, and computer processing may combine overlapping
instructions when appropriate prior to providing visual or audio
output so that the actual output is consistent with all the
instruction sets. The events could also be staggered in time, for
example in the milliseconds, for other embodiments.
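The ordering of same-time events mentioned above can be sketched as follows: sorting the time mesh and grouping events that share an execution time lets the player combine their instructions before producing output. All event names and times here are illustrative:

```python
from itertools import groupby

# Hypothetical schedule: two events share one execution time.
events = [
    ("B2S", 10.0),          # bin 2 start
    ("FADE_OUT_B1", 10.0),  # bin 1 fade-out at the same instant
    ("B1E", 10.5),
    ("PAN_B2", 12.0),
]

# Sort by execution time, then group events with identical times so
# their instructions can be combined prior to visual/audio output.
ordered = sorted(events, key=lambda e: e[1])
grouped = [(t, [eid for eid, _ in grp])
           for t, grp in groupby(ordered, key=lambda e: e[1])]
print(grouped)
```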
[0030] The bin parent object 150 includes multiple child objects
that include an image 120, inter-bin transitions 124 such as fades,
and intra-bin transitions 126 such as pans and zooms. A given bin
150 is defined by start 151 and end 152 events. In the example
shown in FIG. D2, bin 2 overlaps with bin 1 with a fade in 184 as
bin 1 fades out 180. The other bins 3-7 are shown here with hard
transitions and no fades so that other features can be more easily
identified. For example, the time mesh will contain an event id,
for example B1S 148, and an event time 149 corresponding to the
beginning of bin 1. The time mesh will track the beginning and end
of all bins as shown in the figure as bin 1 start B1S and end B1E,
bin 2 start B2S and end B2E, and so on to bin 7 start B7S and
end B7E. Bins may overlap as shown in bin 5 160 and bin 6 164 where
images may be superimposed, e.g. to form a sequence. It is not
shown in this figure, but one embodiment might have audio objects
that belong to a given parent bin object. One example is a
birthday-related template where the last bin has a placeholder for
the user to record a personal birthday greeting. Since the
audio object belongs to a given bin it can only be played during
the fixed time slot defined for a given bin.
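The bin-owned audio object described above, playable only within the bin's fixed time slot, might be sketched like this. The class and field names are assumptions for illustration:

```python
class Bin:
    """Container defined by start and end events on the song timeline."""
    def __init__(self, start, end, image=None, audio=None):
        self.start = start   # e.g. time of the bin's start event
        self.end = end       # e.g. time of the bin's end event
        self.image = image   # replaceable at any time
        self.audio = audio   # optional bin-owned audio object

    def audio_active(self, t):
        # A bin-owned audio object can only play during the bin's slot.
        return self.audio is not None and self.start <= t <= self.end

# Hypothetical last bin of a birthday template holding a recorded greeting.
greeting_bin = Bin(start=55.0, end=60.0, audio="personal_greeting.wav")
print(greeting_bin.audio_active(57.0))  # inside the slot
print(greeting_bin.audio_active(30.0))  # outside the slot
```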
[0031] The bins will normally be sequenced with the music track but
there may be cases where a bin contains a title slide 150 or a
credits slide 170 that correspond to silence 137. The silence may
be stored as entries in the time mesh or as shown here, the absence
of a music track is treated as silence without an explicit time
mesh entry. The bin can be considered as a container of objects
including a specific image and data such as transition effects that
generally applies to an image. The bin exists without any specific
image and image data may be replaced or modified at any time. The
inter-bin transition objects 124 control the effects associated
with the transition of one image to another such as where one image
fades out 184 or fades in 180. The fade transition events are
recorded in the time mesh with the event id 181 and event execution
time 181. The inter-bin transitions will normally overlap from one
bin to another. The intra-bin transition objects control how a
given bin manipulates an image in terms of composition, such as
pans, zooms and color manipulations. One example is the use of pan
to move a viewing window from the left of the image to the right
where the time mesh contains the pan event id 191 and time 192
associated with the beginning 190, and contains information regarding
how the pan is executed in terms of speed and the perimeter of the
bounding box being viewed. Another example has bins, with or without
images, that automatically change the color throughout the duration of
the bin, beginning with one part of the color spectrum and gradually
changing to another, in order to display the intra-bin effect of
zoom, pan, or rotate. This color change could be part of the
finished video or serve as a tool for creation as it captures the
energy of the transition in a visually changing form. The bins may
include objects that can and cannot be altered by additional
contributors or template users. An example is a non-modifiable bin at
the beginning that adds a corporate branding image or icon with
either silence or a short audio segment.
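The pan example above, a viewing window moving from left to right at a given speed, reduces to interpolating a window coordinate over the bin's duration. A minimal sketch with hypothetical times and pixel coordinates:

```python
def pan_window(t, t_start, t_end, box_start, box_end):
    """Linearly interpolate a viewing-window coordinate (here the
    left edge) between its start and end positions over the bin."""
    frac = (t - t_start) / (t_end - t_start)
    frac = max(0.0, min(1.0, frac))  # clamp to the bin's time slot
    return box_start + frac * (box_end - box_start)

# Hypothetical pan: the window's left edge moves from x=0 to x=400
# pixels over a bin lasting from t=12.0 s to t=16.0 s.
print(pan_window(12.0, 12.0, 16.0, 0, 400))  # 0.0 (pan begins)
print(pan_window(14.0, 12.0, 16.0, 0, 400))  # 200.0 (halfway)
print(pan_window(16.0, 12.0, 16.0, 0, 400))  # 400.0 (pan ends)
```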
[0032] The music object 132 exists in digital form as a music file
or a digital signature. A digital signature is a digital
transformation, such as a wavelet transform, that identifies unique
characteristics of a song related to synchronization with bin
objects. The digital signature may be used to synchronize a
downloaded version of the song to the time mesh by using pattern
matching between the template's stored signature and a
transformation of the downloaded song. The matching may be as
simple as locating the beginning of the downloaded song and meshing
it to the song begin event 133. The matching may include
convolution of the two signals to find the best match. The matching
may include stretching and scaling the two signatures to address
situations where the template creation may be done with a different
encoding of the song than the song that may be downloaded at a
later date. The matching may also include cutting portions of a
song, such as the beginning or end, to make the song used by the
user consistent with the song used by the template composer, or
merging bins or cutting bins out of the MPV to be consistent with
the song provided by the user.
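The pattern matching described above, locating the best alignment between the template's stored signature and a transformation of the downloaded song, can be sketched as a brute-force cross-correlation. The signatures here are toy integer sequences standing in for an actual transform such as a wavelet transform:

```python
def best_offset(stored_sig, downloaded_sig):
    """Slide the stored signature along the downloaded one and return
    the offset with the highest correlation (dot product)."""
    best, best_score = 0, float("-inf")
    for offset in range(len(downloaded_sig) - len(stored_sig) + 1):
        window = downloaded_sig[offset:offset + len(stored_sig)]
        score = sum(a * b for a, b in zip(stored_sig, window))
        if score > best_score:
            best, best_score = offset, score
    return best

# Toy example: the downloaded copy has two samples of leading
# silence, so the song-begin event should be meshed two samples later.
stored = [1, 3, 2, 5]
downloaded = [0, 0, 1, 3, 2, 5, 0]
print(best_offset(stored, downloaded))  # 2
```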
[0033] While the template architecture does not require an
object-oriented software architecture for implementation, a preferred
embodiment uses it to allow for an extensible and flexible
architecture. This architecture will allow modifications to be made
and existing templates to be easily updated without breaking the
existing template. For example, if a creator updates their images
to use a new inter-bin transition effect, the changes can be rolled
out across the current template and all modifications thereof. It
also allows for reuse of existing template objects; for example, a
company may create several templates based on an advertising
campaign, with a template for specific products within a given
line. In this embodiment, elements of the template, for example
opening and closing branding, campaign messages, and transition
effects, can remain consistent while the images and songs change. It is also
preferred that permissions on each object be maintained so that the
template creator can control which objects can and cannot be
modified by subsequent template creation contributors or by a
template user.
[0034] A taxonomy of template objects is provided as an example in
FIG. D3. In this example the parent template object has three
child objects: bin, time mesh, and song. Each of these objects has
child objects. For example, each bin object may have an id, image,
transition effects and text associated with it. The inter-bin and
intra-bin objects have transition effects as described as part of
the discussion associated with FIG. D2.
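The taxonomy described here can be modeled as a parent object with child objects. The sketch below is one hypothetical object-oriented rendering; all class names, field names, and types are illustrative assumptions rather than anything specified by the application.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TransitionEffect:
    name: str          # e.g. "fade" or "zoom"
    duration_s: float  # effect length in seconds

@dataclass
class Bin:
    bin_id: int
    image: Optional[str] = None               # path/URL of the occupying image
    text: str = ""                            # associated caption or lyric
    intra: Optional[TransitionEffect] = None  # effect within the bin
    inter: Optional[TransitionEffect] = None  # effect into the next bin

@dataclass
class Song:
    title: str
    signature: Optional[List[float]] = None   # stored digital signature

@dataclass
class TimeMeshEvent:
    time_s: float  # fixed point in the song
    bin_id: int    # bin locked to that point

@dataclass
class Template:
    bins: List[Bin] = field(default_factory=list)
    mesh: List[TimeMeshEvent] = field(default_factory=list)
    song: Optional[Song] = None
```

An extensible structure of this kind is what allows new child objects (such as permissions) to be added without breaking existing templates.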
[0035] Importantly, embodiments may contain instructions which are
references to libraries or instruction sets. For example, a
template object such as an inter-bin transition may contain a
reference call to a particular type of fade. That fade operation
may be pre-defined in a library on a particular computing device or
camera, or may be able to reference a library or set of
instructions online which provide functionality or further
reference to where functionality may be attained. The template
utilizes the reference call made by the fade operation to reach the
instructions defined in the referenced software or hardware. This structure
allows templates to take advantage of pre-existing routines on
specific devices. Further, this structure allows updates relating
to templates or specific hardware to be made online and referenced
by existing templates. A template may contain a reference to
another place which contains a reference to yet another place, and
this process may contain any number of further references until the
actual machine instructions to perform the action are
retrieved.
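The chain of references described above can be followed with a simple resolution loop. This sketch assumes a registry mapping (an illustrative structure, not the application's) where each entry is either another reference name or the actual instructions:

```python
def resolve_instructions(ref, registry, max_hops=16):
    """Follow a chain of references until concrete instructions are
    reached.  `registry` maps a reference name either to another
    reference name (one more level of indirection) or to a callable
    holding the actual instructions for the effect."""
    for _ in range(max_hops):
        target = registry[ref]
        if callable(target):
            return target      # actual machine instructions retrieved
        ref = target           # follow one more reference
    raise RuntimeError("reference chain too deep")
```

A bound on the number of hops guards against cyclic references; a real implementation might instead track visited names.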
[0036] Another embodiment maintains a list of contributors where a
given template parent object may have additional template children
associated with a contributor's modification. In this scenario, the
template database may maintain an `attribution` list where each
contributor is recognized and automatically added to previous
contributors in that inheritance chain. The list may include links
to Internet-accessible sites that contain further updates, such as
for insertion of new material, or additional referencing, such as
modifications that occurred after the creation of the MPV being
viewed. Another embodiment allows for the attribution list to share
in any commercial proceeds or licensing event.
[0037] FIG. D4 shows elements of the timing mesh, which is the
resulting product of a template composer synchronizing music with
images and transitions. It locks items independently to specific
points in the song that are determined by the template composer,
and subsequent modifications of parts generally occur independently
of each other so that they stay synchronized to the audio. 260 is
a timeline of the song, 270 is a waveform characterizing the audio
elements of the song. 250 is a slider that shows which part of the
song is visible in the box (the grey area indicates the visible
part; the white represents the entire video/song length). 200 indicates the
lyrics that are synched to 220, the filmstrip of photos in order of
when they appear. 230 is the previous image, 240 the transition to
the next, and 250 the current image being shown. The dashed box
around 240 and 250 is the element that is editable as described
below. It can also extend to cover the preceding and following
transition, or can encompass an image, an intermediate transition,
and the following image, such that the encompassed area is editable
as described below. 210, the checkboxes, allow certain elements to
be displayed so that the composer can view or hide whichever of the
aforementioned items are desired. The items can be dragged up
and down to reorder the elements described. 296 allows the
filmstrip timeline to be increased or decreased in size, showing
correspondingly less or more of the series and allowing easier
editing. Template composers can drag sliders to navigate or use
buttons as shown. Additionally, they may drag sides of the pictures
or transitions to change where they occur in the timeline. The most
important aspect of the timing mesh is that all elements here are
synched to the timeline and not to each other, such that dragging
an image left will change that image, but not the audio or any
previous or subsequent images. They are instead locked into the
timing mesh independently. The structure of the timing mesh also
allows segments of an MPV to be cut and for the contained elements
to be reindexed to a time mesh that starts at that segment time, as
might occur when combining portions of two MPVs.
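The two timing-mesh properties just described, independent locking to the timeline and re-indexing of a cut segment, can be sketched as follows. The dictionary layout is an illustrative assumption, not the application's data format:

```python
def drag_bin(mesh, bin_id, new_start_s):
    """Move one bin to a new start time on the song timeline.  Because
    each element is locked to the timeline rather than to its
    neighbours, only the dragged bin changes; the audio and all other
    bins keep their positions."""
    for event in mesh:
        if event["bin_id"] == bin_id:
            length = event["end_s"] - event["start_s"]
            event["start_s"] = new_start_s
            event["end_s"] = new_start_s + length
    return mesh

def cut_segment(mesh, seg_start_s, seg_end_s):
    """Extract the events falling inside a segment and re-index them to
    a time mesh that starts at the segment's beginning, as when
    combining portions of two MPVs."""
    return [
        {**e,
         "start_s": e["start_s"] - seg_start_s,
         "end_s": e["end_s"] - seg_start_s}
        for e in mesh
        if e["start_s"] >= seg_start_s and e["end_s"] <= seg_end_s
    ]
```

Note that dragging bin 2 leaves bin 1 untouched, which is the key behavioral contrast with editors that chain elements to each other.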
[0038] Template creation is facilitated by a graphical user
interface (GUI) that provides visual aids to synchronize image
selection and transitions to specific songs. Image transitions
can be divided into two classes.
[0039] Inter-bin transitions describe visual effects associated
with the switch from one image, video image sequence, or animation
to another. An
inter-bin transition may describe the type and rate of fade from
one image to another. The inter-bin transition may be created or
described through a graphical user interface (GUI) or could be
described in an existing or new programming language.
[0040] An example of how a GUI might facilitate intra-bin
transition is shown in FIG. D6. In this example the intra-bin
editor 600 provides a suggestion box that offers hints, with an
option for more explanation and an image suggestions box. The box
can be hidden by clicking on the x in its corner. The user can also
pull up additional hint information either by clicking on the box
or by placing the mouse over it. The main viewing window at 610 shows the current image
when paused and the changes in image (still or video) when play is
pressed. The main viewing window may be in the form of a media
player that allows the user to play, stop, pause, reverse or
forward their way through a transition sequence.
[0041] The initial framing of the current image (with effects
already in place) is indicated at 620; it might be zoomed in,
altered, or have other visual effects contained within. The initial
frame may allow the user to select the initial condition or state
before pan and zoom effects are applied. The user can select a
point on the image, indicated at 630, on which to focus, via an
optional target tool. The user can click to edit the point, link it
to the end image, or remove it from the tool. This also enables a
point to stay static during a zoom in or out. The ending framing of
the image is indicated at 640. As in 630, the user can click to
edit in a number of ways to control the final state of the pan and
zoom effects.
[0042] Shown at 650 is the timeline from the start of the image to
the end of the image, along with a status indicator. The slider
moves automatically with time. The user can drag the slider to a
desired point within an image transition or the MPV. A loop button
for the image transition is shown at 660.
[0043] The zoom or pan movement in an image might occur linearly,
exponentially, logarithmically, or according to another parameter.
Oftentimes linear movements do not correspond well to the
perception of consistent movement. Movements of any of the
parameters might automatically scale to maintain perceptual
consistency.
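As one hedged illustration of why non-linear parameter movement can look steadier: an exponential ramp keeps the ratio of zoom change constant per unit time, which tends to be perceived as a steady zoom. The function below is a sketch under that assumption, not the application's algorithm:

```python
def zoom_factor(t, z_start, z_end):
    """Zoom level at normalized time t in [0, 1].  A linear ramp from
    z_start to z_end appears to slow down as the image grows; this
    exponential ramp multiplies the zoom by a constant ratio per unit
    time, which reads as consistent movement."""
    return z_start * (z_end / z_start) ** t
```

A logarithmic or other parameterized curve, as the paragraph notes, could be substituted in the same slot.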
[0044] As a template composer creates the template, he might
utilize a tool such as the image selector tool in FIG. D9. Images
can be manually or automatically inserted onto the palette in 910,
where each grey box represents a different picture. These images
originate from the folders, websites, or other sources described in
900. 920 represents the image from 910 currently selected for
insertion into the bin that is contained by the dashed rectangle on
the film strip (as described in FIG. D2 at 220). A template
composer simply drags an image from the palette into the film strip
at the desired location and the image now occupies the bin. The
information about transitions and timing already exists with the
bin, regardless of the data, though some images may have target
data pre-associated with them to assist with transitions, or can be
analyzed such that target data is derived automatically where a
face might be. There is a resizing button at the bottom of 910 so
that images can be resized to fit more or fewer images on the
palette, as well as sliders to move throughout the palette. Other
data may be inserted onto this screen, such as hint information
just above 920 that will be supplied to the user to assist in the
user's selection of images.
[0045] The synchronization of the inter-bin and intra-bin image
transitions with music represents fixed points in time during the
duration of the song and is managed as part of the time mesh. As
indicated in FIGS. D4, D5 and D6, elements of the time mesh may be
shown visually as a time line that ties the image selection, image
transitions, music or lyrics together. Also as shown the GUI may
incorporate the use of multiple tracks to aid the user in viewing
simultaneous bin, image or song related events, as illustrated in
FIGS. D7 and D8. Also the selection of transition effects may be
aided by the use of a handheld device that a template creator can
sway like a wand or conductor's baton. This may allow the creator
to more easily translate how they process the music in their brain
to the visual effects that define one or more bins. Other devices
may include accelerometer based devices that capture motion, such
as dancing or other bodily movement and translate those motions to
visual effects during template creation.
[0046] FIG. D7 provides an embodiment of intra-transition viewing
with multiple overlapping bins. 700 is a suggestion box which
offers hints for both bins, with an option for more explanation and
an image suggestions box. 710 is the main viewing window, which
shows the current view in the video that mixes multiple bins and
utilizes the current inter and intra transition effects. This view
changes as the movie proceeds when play is pressed. It is synched
to the slider and the timeline at the bottom. 720 is the initial framing of the image in bin
1 (with effects already in place); might be zoomed in, altered, or
have other visual effects contained within. 722 is the end framing
of the image. 730 is the initial framing of the image in bin 2
(with effects already in place); might be zoomed in, altered, or
have other visual effects contained within. 732 is the end framing.
740 is the timeline from the start of the image to the end of the
image, with a status indicator. The slider moves automatically with
time, and the user can drag the slider to a desired point. 750 is
the main filmstrip showing non-overlapping bins. 760 is the second
filmstrip showing bins that overlap with the first. There can be
multiple filmstrips here, wherever bins overlap (for multiple
pictures shown at one time that are not simply transitioning from
one to the next). Transitions are shown before and after to
indicate when an image starts and when it ends, with transitions
also covering inter-slide transitions of the images before and
after. The dashed box refers to the section shown in the viewer at
710 and is consistent with the full time bar in 740.
[0047] Another embodiment of displaying multiple tracks for a
composer or user is described in FIG. D8.
[0048] FIG. D8 provides an example of how bins may be displayed
vertically to give the User or Composer an easy method to view how
bins proceed over time with respect to time placement of bins and
transitions. 800 shows current images occupying given bins. Bins
are listed in order from top to bottom. In User Mode, clicking on a
bin in this section brings up a screen to assist User in selecting
a picture. In Template Composer Mode, clicking on a bin allows the
Composer to edit the fields that will be visible to the user, such
as keywords, effect, image selection (for multiple images to be
recommended), etc. 810 is a pair of up and down arrows that allow
the user to scroll up or down the list of tracks for editing. 820 denotes a
visually placed time segment, listed by beats or by a pre-defined
time period such as seconds. 830 is a bin, placed on the horizontal
axis at the time it is to begin being shown, and the transition
time in and out. The shaded boxes indicate transitions. The clear
boxes are where the image is full (though likely involved in an
intra-bin transition). In Composer Mode, bins can be dragged to a
different time, stretched, copied, or modified in other ways.
Clicking on a box allows the Composer to modify the bin and its
characteristics. 840 is a timeline (horizontal) of the currently
shown section; the current time being played (vertical shaded bar).
850 is a track-size slider that allows the tracks to be shrunk in
size so that more tracks can appear, allowing the composer or user
to view the full set of tracks for a video or just the current set
being viewed. 860 is a timeline-size slider that allows longer or
shorter periods of time to be represented by the timeline.
Composers require fine tuning of tracks and this method can act
like a microscope, or allow the full video to be viewed on one
screen.
[0049] FIG. D9 also provides an idea of how an end user might drag
their images into the filmstrip to personalize a template to their
tastes. Certain features would likely be disabled for the end user
that are accessible to the template composer, or which may be
unlocked by an end user who assumes the role of a template
composer. In the above it is assumed that the template composer and
the end user are separate individuals, though they certainly could
be the same individual. Additionally, any template composer could
begin working from a pre-existing template, whether partially or
fully complete, in order to modify the template to their liking.
Any user could modify a template, whether in original form or
already modified by another user, to suit their taste. For example,
a friend may receive a video full of pictures from a birthday party
and choose to swap one picture of the birthday boy with an old
picture of the birthday boy from childhood. There would likely be
tracking data associated with changes over time, and possibly a
repository in which the old versions and the associated data are
stored. Access to different features of modification would depend
on the licensing terms entered into by the various parties. For
example it may be necessary to purchase a copy of the song or
template in order to modify a friend's video, or even to purchase
one or more copies of the song before it may be shared with others.
Some of these rules might enable access to an existing copy of a
song already residing on a friend's computer that would need to be
referenced at run-time. The fingerprinting of the song in order to
synchronize start times is of particular importance in this
instance.
[0050] Elements from FIGS. D4, D5, D6, D7, D8, and D9 may be
combined to display a set of data to allow composers and users to
view and modify aspects of an MPV most efficiently for their
specific purposes.
[0051] FIG. D10 illustrates the mathematics of a tool available to
template composers and possibly end users which allows an image to
have a point on the photo which stays static in its position on the
screen while the photo is zoomed in or out. For example, a point
between the eyes is selected as in the above illustration and while
the image zooms out, the point between the eyes stays in exactly
the same position on the screen. The target tool shown in FIG. D6
would allow these two points on the Start and End images to be
locked together and move together if the user changes which part of
the image is shown on the screen. For example, when locked, if the
first image is moved down so that more of the head shows, the
second would move in lock step to keep the point between the eyes
consistent in the before and after. The second image would
therefore be constrained in where it could move; its movement would
require the first to also move. This tool could occur
automatically, semi-automatically, or manually; the software may
suggest points on the image using algorithms to detect likely
targets, such as eyes or planets.
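The geometry FIG. D10 describes reduces to solving for the image offset so that the target point maps to the same screen position at every zoom level. This sketch assumes a simple screen = (point − offset) × zoom viewing model, which is an illustrative assumption rather than the application's stated mathematics:

```python
def offset_for_static_point(point, screen_pos, zoom):
    """Return the image offset (top-left corner of the visible window,
    in image coordinates) such that
        screen_pos == (point - offset) * zoom.
    Recomputing this offset each frame as the zoom changes keeps the
    selected point (e.g. between the eyes) pinned to the same spot on
    the screen while the photo zooms in or out."""
    px, py = point
    sx, sy = screen_pos
    return (px - sx / zoom, py - sy / zoom)
```

Locking the Start and End images' target points together, as the paragraph describes, would amount to constraining both framings to satisfy this equation for the same screen position.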
[0052] The next section provides an example of how a user might
utilize the templates created by a template composer. The template
is used with a camera or camera-equipped device to compose or
capture images in real-time.
2.2 IMAGE COMPOSITION AND SELECTION USING TEMPLATE OBJECTS
[0053] This section provides an example of how a user might utilize
the templates created by a template composer to select images. As
shown in FIG. D11, the template software or software associated
with a template may provide four types of functions as part of the
MPV creation process. The first function is that it provides
`opportunity` by allowing for one or more templates to be
downloaded or shipped with the computing device and providing
immediate access to the template at any time. The availability of
multiple templates further provides the opportunity for the user to
match a given setting or event with an appropriate template to
create a personalized MPV.
[0054] The second function is that the template or software
associated with a template will `assist` the user in the
composition of the image such as the placement of the primary
subject relative to a background or landscape or the movement of
the subject for a sequence or burst of images. The third function
is that the template or software associated with a template may
provide instructions to the user or the images regarding settings,
such as depth of focus or flash settings, for one or more image
captures. The fourth function is that the template or software
associated with a template enables the user to view existing sample
images on the digital display, to view and select one or more
captured images for a given bin (thus replacing a given template
sample image), and finally to view the finished MPV on the display. The template
processing unit may be in the form of a software module that
resides within memory and is executed on a digital device or it
could be a separate integrated circuit component configured for
template operations. The template software may be downloaded and
the software associated with a template may be embedded into the
digital device. The software associated with a template that may be
embedded into the device will likely interpret a given template
object to assist, capture, place images and view the final result.
In either case, the template processing unit (constructed in
software or hardware) communicates with the digital display and
various selection or modification functions presented in FIGS. D4
through D9.
[0055] As described in Section 2.1, each template is associated
with a particular song and has fixed bins for which a user will
choose images. In the preferred embodiment the song is available so
that the user can hear it while composing and capturing the images,
which provides a richer experience. However, in other cases the
song may not be available for download or included with the
template, and the invention includes the use of
the template without the song as well. The synchronization of the
sample images and the music is done during template creation so the
replacement of the sample images with new images for a given bin
can still occur.
[0056] The resulting MPV can be sent from the computing device to
another person's computing device. There are multiple ways to do
this. One method is for User A to email the completed MPV to
another person, Recipient B, who receives the MPV or a link to a
server where the MPV may be downloaded. In the case where the
template is associated with a commercially available song, User A
may choose to purchase the song for Recipient B, and the MPV
is downloaded with the associated song. User A may choose not to
purchase the song and Recipient B receives a URL link to download
the MPV and purchase the song. User A may choose not to purchase
the song and Recipient B receives the MPV and is prompted to
purchase the song when they try to play it. User A may choose not
to purchase the song and Recipient B receives the MPV and already
owns the song which resides on a device that is used to view the
MPV. The invention includes the various combinations of
transmitting MPVs and the associated songs, or the various
combinations of sending links to locations where MPVs and songs may
be downloaded.
[0057] FIG. D13 provides an embodiment of the process for an end
user operating software that utilizes a template to create an MPV.
In this case the user will select photos, though other similar
embodiments would include video, animations, text, or other visual
materials. The user starts by choosing a song, mood, or theme 2000
for which to create a video. The user can view templates 2010 built
on that song, mood, or theme to identify which template will be
selected for modification. There may be multiple templates made by
a variety of template composers that are all built upon an
identical song. In 2020 the user selects the template to use,
possibly including the purchase of the song and template at this
stage. In 2030 the user views the instructions contained within the
template about characteristics of the images to select for
placement into a bin or a series of bins (most likely in the form
of a list). In 2040 the user begins to look at the first bin which
requires an image, and in 2050 instructions and lyrics about what
the image should contain are included. Additionally, while in this
bin the corresponding section of the song may play so that the user
can feel the mood of the music and hear the lyrics. In 2060, the
user takes one or more pictures which are stored into the currently
selected bin. One of these images will occupy the bin and that
image will be selected in 2070 by the user. It may default to the
last picture taken in the bin. Certain bins may require multiple
images to be included and these could be selected from the bin. In
2080 the user proceeds to the next bin, either automatically after
selecting an image or by manually indicating to proceed. After the
last bin is filled, 2090 allows the user to view the full movie
with their images. After viewing, in 2100 the user has the option
to share, upload, or gift the video. In 2110 the user has the
opportunity to further edit the video on a PC or other mobile
device such as a digital camera, digital frame, mobile handset, or
other electronic device that offers a user the ability to download
the video and its parts and provide further commands for replacing
images in various bins.
[0058] There may be additional steps in FIG. D13 and the listed
steps may be skipped or performed in a different order. In one
embodiment, there would be no lyrics or written suggestions, but
there would be sample images in bins that the user would view but
replace. In another embodiment, colors are used to indicate what
types of images to place into different bins. In another
embodiment, the user points to a group of images which are placed
automatically into the bins in a random or semi-random order and
the user is able to arrange them as they desire while watching the
video with these images.
[0059] Other various optional steps are included in the dashed
boxes in the right column in 2011 through 2081. For example in 2011
a user may view multiple templates for a given song at the same
time in order to see the differences as they occur. One template
video could be on the left and a second on the right. 2021 allows
for purchase of a song or template at a given point in the process,
which could also occur in 2041 when gifting to another person. 2051
allows a user to move between bins, allowing images
to be captured out of order as the user desires. 2061 gives the
user the opportunity to view all or a segment of the video at a
given time, or could allow a given bin to loop through the various
images in the bin so the user can see how they look. 2062 allows
many images to be referenced to a bin for easy reference at a later
point in time even though they are not the image being used in the
MPV in the bin slot. The real-time reference to a bin is an
improvement over the current requirement that a user taking many
photos at a given time must perform the binning assignment to a set
of photos later on, as they decide which image to select, without
the benefit of music and while having to remember what goes where,
particularly for photos taken out of sequence of the bins.
2062 helps organize the assignment of the images taken in
2061 and avoid confusion later. 2071 allows the user to find photos
previously taken and place them into the selected bin, or allows
the user to locate and select other symbolic images that may have
been downloaded with the template for particular bins. The user may
also choose an image from several that appear from a search on
keywords relating to the bin or from previous images taken that
reside in other folders. 2081 allows annotation of images or blank
bins, or allows basic editing of selected images such as red-eye
reduction, targeting which area to focus on in the image as it
moves, or resizing the viewable part of an image.
2.2.1 Opportunity
[0060] The opportunity function described in FIG. D11 relates to
items 2000, 2010, 2011, 2020, and 2021 in FIG. D13. One embodiment
is indicated in FIG. D12 as an system where the functions are
already integrated either in hardware, software, or a combination.
In one case digital device is sold to consumers pre-loaded with the
ability view and select templates as well as create an MPV after
selection. In another case a user download software that allows for
template viewing and modification onto a more generalized device
such as a mobile phone, game system, or personal computer that is
capable of downloading software applications and already contains
the necessary hardware to store, view, and utilize templates. There
are also embodiments that utilize pre-existing functionality in the
device, including pre-loaded software for the creation of MPVs,
that utilize download features to further update software that
resides locally on the device for the creation of MPVs or addition
of templates.
[0061] In one embodiment the downloaded software may contain
templates or portions of templates for the user to choose from
locally on the device as in 2010. While not required, there would
likely be navigational utilities to help in the selection of
templates, guiding a user through the selection of moods, themes,
or songs. As an example, a user may key in a mood such as happiness
or melancholy. The device would then search for templates that are
resident on the device or resident on a server to which it can
communicate to select templates that have been identified with that
particular mood. Templates, portions of the templates, or
descriptive information about the templates, such as a song the
template is based on, would then be delivered to the user.
Different methods for ranking and ordering the templates would be
employed, such as which is most popular by purchase in the last
week, which has been rated highest by viewers, or which are
associated with template composers that gained reputations for
creating quality templates. The user would then receive an ordered
list of templates, organized by one or several of these ranking
methods, that provides available templates to choose from. The user may then
view these templates as in 2010 to see what they like best. While
viewing a specific template, the software may provide other guides
such as a suggestion that users who viewed particular templates
ended up purchasing other templates, possibly including the percent
of users who purchased each after viewing the current template.
[0062] A user might also choose to select a theme such as Christmas
or Halloween. The user might undergo a similar process as indicated
in mood, being provided a list of templates ranked by a variety of
methods. The user might also utilize both a theme and a mood to
select the templates to view. In one example a user selects
"Halloween" and then "funny", providing a list of templates that
are closest to these parameters. Other templates that might be
related to "Halloween" and "classic" but not related to "funny"
would not be shown. Other criteria for any search might be that a
template's cost is free, is within a certain price range, or is
freely distributable to others. The same might apply to a
template's music which might be free, within a certain price range,
or be freely distributable. Other options may be available such as
the ability to include advertising within an MPV in order to offset
the cost of the template or music, or templates which are free if
the user provides rights to freely distribute or showcase the
finished MPV to a software provider. The intent of the navigational
utilities is to provide an easy method for users to select a mood,
theme, or song that fits their desired criteria as quickly as
possible and that provides the most utility to the user, be it
popularity, quality, or other criteria.
[0063] Another method for search might be by song. The software may
provide users a list of templates, organized by any of a number of
criteria: the highest-rated templates based on a particular song or
artist; the most current Billboard chart toppers; the highest
ranked in any of a number of musical genres such as Country Music
or R&B; the most recently purchased templates; templates whose
finished MPVs have been most distributed; songs that have the most
templates; templates ranked as highest quality by one or more
groups; or templates that are most relevant to purchases already
made by the user or to a user's demographic as defined by the user,
by the software provider, or by a third party such as a DJ in a
genre subscribed to by the user. Keywords are also obvious
selection criteria for templates, and might include song names,
musical artist names, musical album names, lyrics associated with
the template, or synonyms of any of these. The software might
provide results based on a match of keywords and other ranking
criteria, such as a blend of the keyword and popularity.
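One hedged way to realize the blend of keyword match and popularity described above is a weighted score. The weights, field names, and normalization below are illustrative assumptions, not terms of the application:

```python
def rank_templates(templates, keywords, w_match=0.7, w_pop=0.3):
    """Score each template by blending its keyword overlap with a
    normalized popularity value, then return templates best-first.
    Each template is assumed to be a dict with 'keywords' (list of
    strings) and 'popularity' (float in [0, 1])."""
    wanted = {k.lower() for k in keywords}

    def score(t):
        have = {w.lower() for w in t["keywords"]}
        match = len(wanted & have) / len(wanted) if wanted else 0.0
        return w_match * match + w_pop * t["popularity"]

    return sorted(templates, key=score, reverse=True)
```

Other ranking criteria from the paragraph, such as recent purchases or composer reputation, could be folded in as additional weighted terms.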
[0064] Such a search selection of templates could occur directly on
the device. Certain templates which are most likely to be purchased
may be preloaded into the software and available for a user
immediately, possibly with a purchase required. Other search
results would likely come from a connection to a user's computer or
to the Internet. The device user could browse the available
templates or browse template information provided from the
Internet, which would be delivered to the device as search criteria
were provided to a server containing template information via the
Internet. The user might then download selected templates for
purchase or for previewing directly onto the device.
[0065] A computer may also be used for template selection. Software
for template selection (and possibly MPV creation or template
editing) might contain many preloaded templates ready for viewing
or use. The user might view these on a PC and select which they
would like to place onto the camera with the intent of creating an
MPV. A computer might also allow for the downloading to a related
device. For example a user might download an MPV or a template to a
pre-registered device such as a camera or a digital frame that has
another means of connecting to the Internet such as through
wireless telephone networks, through wireless connections to a
local area network, or where a device is connected, wired or
wirelessly, to another computer accessible by the Internet such as
that for a family member located elsewhere.
[0066] In many cases, a user will be required to log into an
account prior to accessing or purchasing templates. The account
might contain credits or value that a user has access to for the
purchase of music, a template, or both. It might also contain MPVs
that were purchased or created by the user, or gifted to the user
by another, either as a rental or perpetually licensed gift,
possibly with the ability to view future variations or updates to a
given MPV template.
[0067] Another embodiment for MPV creation allows the user to view
pre-created video or image footage and select parts of the footage
to insert into a selected template. In this instance, a user might
download both a template and one or more hours of footage which
they would watch. Such a device need not contain image capture
since the raw image footage would already be provided. The template
might offer suggestions about what images or video to place into
the bins. The images or video might be further editable, such as
cropping, rotating, or changing color. The user would then be able
to create the MPV based on these images. High resolution images may
reside on a server and be accessible during the MPV creation
process or after a required purchase. In such a case the MPV
creation software would note the time of a particular image in the
video being watched and be able to reference the higher resolution
image from this time data. The MPV could then be created on the
device locally or on a server and delivered to the device or to the
user's account for further distribution. The footage viewed by a
user may or may not be related to a particular template.
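The timestamp-based lookup described above, where the creation software notes the time of a selected moment in the viewed footage and references the corresponding higher-resolution image, can be sketched as follows. The frame rate, server path layout, and function names are assumptions for illustration.

```python
# Illustrative sketch: map a timestamp in low-resolution footage to a
# reference for the matching high-resolution still held on a server.

def frame_index_for_time(time_seconds, fps=24.0):
    """Convert a timestamp in the viewed footage to a frame index."""
    return int(round(time_seconds * fps))

def high_res_reference(footage_id, time_seconds, fps=24.0):
    """Build a hypothetical server-side reference for the high-resolution
    still corresponding to a moment the user selected while watching."""
    frame = frame_index_for_time(time_seconds, fps)
    return f"{footage_id}/frames/{frame:08d}.jpg"
```

In practice the server would resolve such a reference only after any required purchase, as the paragraph above notes.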
2.2.2 Assist
[0068] The assist function described in FIG. D11 relates to items
2030, 2040, 2041, 2050, 2051, and 2070 in FIG. D13. The goal of the
assist function is to improve the imagery captured and selected by
users by providing users with educational information before and
during the image capture and selection process. Information fed
into the assist function is generally provided or chosen by the
template composer. Some examples of features provided in the assist
function are illustrated in FIGS. D11 and D13.
[0069] FIG. D16 provides an example of how a user might utilize the
templates created by template composers, as indicated in the
Personalization portion of FIG. D1. FIG. D16 illustrates a screen
being utilized by a user who has already created images and wants
to place them into an MPV. The user may have gone through the
process of capturing images with a live camera and placed them into
bins, may have images that came with the template, may have
searched for additional images using a search engine, or may have
accessed images previously taken by the user. This tool allows the
user to easily view many images, select the proper image for the
bin, then move on to the next bin, repeating the process as needed.
The selection process could occur on a portable device,
such as a camera, to augment parts of an MPV that the user desires
to modify with pre-existing pictures.
[0070] 2500 shows folders that contain images (still or video).
Users can add more folders, including websites that contain images.
2502 displays a visual list of images from the folders listed in
2500. Images can be dragged into the Time-synched Energy Template
to replace an existing image. Images can be grouped by "Your Images
Taken" or by pre-loaded "Suggested Images" that are supplied with
the template or taken from a web search based on keywords of the
image. 2506 displays instructions for the User about what type of
image to select. 2508 shows lyrics for the User for the current
bin. 2510 shows the currently selected picture from 2502. 2514
shows the timeline of images, including at least the prior,
current, and next. 2516 shows the current image occupying the
current bin. It is consistent with 2510. 2518 shows the transition
into current image occupying bin. 2520 shows controls for moving
forward/backward in time or to next/prior bin. 2522 shows a
view-size slider which allows images in 2514 to be made larger or
smaller. 2524 shows a volume control slider. 2526 shows a menu
button that brings up additional options including selecting images
to show, moving bins, saving progress, changing screens, turning on
or off optional features (e.g. lyrics) etc.
[0071] The assist function may provide additional features to those
in FIGS. D14, D15, and D16. Features, including some of those
mentioned in the earlier Figures, include the following. [0072] A
list which can be viewed on a display, where each item refers to a
bin and includes keywords. The list can be viewed on the device
display, emailed, viewed on a PC, or viewed in another way. [0073]
The ability to view a library of images pre-selected for bins by
template composers, images placed into bins of MPVs previously made
by users, images from local or online libraries, images from online
communities, images from linked friends in online communities, or
images that have been processed and possess visual characteristics,
any of which might or might not be tagged with keywords consistent
with the template composer's instructed keywords. [0074] The
ability to view a list of images pre-selected by a template
composer or third party, with instructions about the positive
characteristics of the image, including what a user should try
to do in composing an image for a particular bin. [0075] The
ability to view multiple images in a bin in an MPV at the same
time, or to view images designated for a single bin in sequence in
a loop, possibly including a prior and/or next image. [0076] The
auto-detection of characteristics within an image, such as a face
that might fit into a pre-defined area of a background image. [0077]
Instructions for a user to take multiple images that could be
automatically cropped or super-imposed, possibly in a sequence over
time. Images might be taken in a certain portion of the viewfinder,
or be full-sized images which are resized and placed into the
prescribed position. [0078] Instructions that can be toggled on or
off to guide a user in the placement and sizing of a subject.
[0079] Lyrics that can be toggled on or off for a particular bin.
[0080] Automatic selection of the point of focus on an image. [0081]
Manual selection of the point of focus of an image using
instructions on what to place the focus on, such as a building, a
person's eyes, a river, or another object. [0082] Automatic selection
of one image for each bin from a pool of images marked as belonging
to a particular bin to be placed in the bin for the creation of an
MPV, or the ability to watch an MPV that selects from a designated
pool of images for each bin. [0083] A service whereby an image can
be uploaded and commented on, edited, or modified by a third party,
providing the user with guidance about the quality of the image and
what could improve the image. [0084] A service whereby the images
taken by a user, possibly many images that may or may not be
designated for specific bins, would be transmitted to professionals
or third-parties, possibly be edited or cut, and placed into bins
to form an MPV for the user. [0085] Any combination of the
above.
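The automatic per-bin selection mentioned in [0082], where one image is chosen for each bin from a pool of images marked as belonging to that bin, can be sketched as follows. The data shapes and the use of random choice are assumptions; a real implementation might instead rank candidates.

```python
import random

# Minimal sketch: each bin has a pool of candidate image identifiers,
# and one is picked per pool to assemble an MPV. Re-running with a
# different seed yields a different variation of the same MPV.

def assemble_mpv(bins, rng=None):
    """Pick one image for each non-empty bin from its candidate pool.

    `bins` is an ordered list of pools (lists of image identifiers).
    Returns the chosen images in bin order; empty pools are skipped.
    """
    rng = rng or random.Random()
    return [rng.choice(pool) for pool in bins if pool]
```

This also matches the viewing mode described later, where an MPV "selects from a designated pool of images for each bin" on each playback.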
2.2.4 View
[0086] The view function described in FIG. D11 relates to item
2090 in FIG. D13. Users can view MPVs with sample images, view
their own with final images, view with different images being
placed into a single bin to help decide which to select, or view
MPVs semi-randomly, where images are placed into them according to
some selection criteria such as bins containing multiple images or
such as images in any available library matching tags on the bin.
Viewing is also possible on other devices, such as digital picture
frames, or through output to another device by sending a digital or
analog output signal such as connecting the camera to a TV for
viewing the MPV. Viewing may occur at variable volume levels
(including muted), at variable speeds for faster or slower
playback, and utilize standard navigational icons including
play, pause, go to end, go to beginning, fast forward, rewind, and
varying degrees of fast forward and rewind. Users may also watch an
MPV in a mode where the bins can be modified as they are watched,
such as changing a point of focus or amount of zoom, or use
accelerometers, gyroscopes, or other physical movement sensors to
modify the MPV as it is being viewed, such as pulsing to a physical
movement or panning based on turning the device. In some of these
instances, images, higher resolution images, or a rendered video
would need to be downloaded into the viewing device so that the MPV
could be assembled or viewed.
2.2.5 Share
[0087] Users may share an MPV in complete or partially complete
form. One method is to email the MPV file and possibly associated
files, as a movie file or collection of MPV-related files that
would be accessed during playback for construction of the video.
Another method is to upload MPV files to a server where they can be
accessed by others, possibly through an account they have set up on
the server. If license fees are required, users may have already
paid for certain users to be able to view the video, or they may
allow others to purchase necessary rights to view the video.
Purchase fees might allow for the ability to modify part or all of the
MPV. An MPV on a server may also be rendered and shared as a
non-modifiable movie or MPV file rather than modifiable MPV files.
Users might also use other transfer techniques such as Bluetooth
technology for wireless transmission or a USB cable to share from
one device to another, or be allowed to burn files onto a CD so
they can be transferred to another user.
[0088] FIG. D19 describes a likely scenario for a user who has just
completed personalizing an MPV with his own images. At this point a
user may desire to place a title and share the MPV with friends.
2800 shows an area where User can change title of the video. 2810
shows an area where User can select email addresses of others to
share video with. 2820 shows an area where User can upload video to
backend service and User can access from other computer or
device
[0089] There may be various other embodiments not described here
that achieve the spirit of the invention to accomplish these
desired tasks with the combined objectives of simplicity of use and
maximum intensity of effect. The novel features and advantages of
the invention are described in the next section and capture key
elements of a larger set of embodiments that achieve the spirit of
the invention.
2.3 APPLICATIONS OF MPV BEYOND VIDEO USAGE
[0090] One application of MPVs beyond video creation and viewing is
for video games. The user would be able to take images in real
time and receive a score for the sequence or for individual images.
Alternatively, the user might be required to size and place a
rectangle inside existing pictures in order to cut the images,
after which the user would receive a corresponding score based on
how well the image was cut. Many variations could build on this, such
as determining the rotation or the point of focus. Scoring an image
could take place automatically or could rely on a service of live
humans that rates the modified image or the chosen effect.
Educational opportunities could arise from this as well, such that
the game is marketed as an educational utility to help improve the
image capture of aspiring photographers or children. Some of these
may come with preloaded footage as described earlier such that the
final images are demonstrated to the user as a collection of
preferred images. In other games, there could be specific points
where the user needs to cut a song according to some criteria they
are judged against, such as certain beats or accents. Users might
be required to move the device according to the effect that is
happening such as down or up or rotated based on movement of an
image.
[0091] Another videogame example is a treasure hunt, where children
are directed to find and capture particular objects. Instructions
might occur by audio instructions or written instructions, or use
images that possess the desired quality the child is searching for
such as "red" or "three" or "building" or "Mickey Mouse". The child
would then take photos of the items as each bin requests, and at
the end of the exercise the video would be created automatically
using the items captured, synched to some type of pre-defined
music. Images of the child might be overlaid with the items found.
Items might also need to fit within certain areas on the screen to
ease the video editing, and the direction or the display screen
would assist the user.
[0092] Videobooks are another application. Videobooks can either
physically display several images from video sequences in a print
form similar to a comic book, or may digitally offer video
sequences similar to an advanced feature digital frame capable of
displaying sound and music. For example, a user might be able to
take screen shots every second or take a series of shots close
together in a video sequence and then compile them with others so
that a story from the book emerges in print form. There might be 20, 50,
100, or more photos in the sequence, likely arranged by bin. Lyrics
or comments might be included for a given bin. Images would likely
contain effects that had been performed in the MPV so that a series
of images, for example, zooms or pans similarly to the
MPV.
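The videobook extraction step above, capturing a still every second and arranging the stills by bin, can be sketched as follows. The timing values, data shapes, and function name are illustrative assumptions.

```python
# Hedged sketch: compute still-capture timestamps for a videobook and
# group them by bin, given the start time of each bin in the MPV.

def videobook_pages(duration_seconds, bin_starts, interval=1.0):
    """Return {bin_index: [timestamps]} of still-capture times.

    `bin_starts` is a sorted list of bin start times in seconds; each
    still is assigned to the bin whose interval contains its timestamp.
    """
    pages = {i: [] for i in range(len(bin_starts))}
    t = 0.0
    while t < duration_seconds:
        # assign the still to the last bin that starts at or before t
        bin_index = 0
        for i, start in enumerate(bin_starts):
            if start <= t:
                bin_index = i
        pages[bin_index].append(t)
        t += interval
    return pages
```

The grouped timestamps could then drive extraction of the actual frames, with per-bin lyrics or comments laid out alongside, as the paragraph describes.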
[0093] The portable device may also be used in the semi-automated
creation of templates. Any type of sensors, such as accelerometers,
could be used to capture human movement as a song occurs. That data
could then be interpreted to provide cuts, pans, zooms, blends, and
other effects. Input devices used to collect this data might be
external to the device but plug into the device with a cable or use
a wireless communication technology such as Bluetooth to send
information to the device.
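The interpretation step described above, turning sensor data captured during a song into template effects such as cuts and pans, can be sketched as follows. The threshold values and the specific effect mapping are assumptions, not values from the application.

```python
# Illustrative sketch: map (time, magnitude) accelerometer samples,
# captured while a song plays, to template effects. A strong movement
# becomes a hard cut; gentler movement becomes a pan whose strength
# follows the measured magnitude; very small movement is ignored.

def movements_to_effects(samples, cut_threshold=2.0, pan_threshold=0.5):
    """Return a list of (time, effect_name, parameter) tuples."""
    effects = []
    for t, magnitude in samples:
        if magnitude >= cut_threshold:
            effects.append((t, "cut", None))
        elif magnitude > pan_threshold:
            effects.append((t, "pan", round(magnitude, 2)))
    return effects
```

The same structure would accommodate zooms, blends, and other effects by adding further magnitude bands or by using directional sensor data.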
* * * * *