U.S. patent application number 13/762834 was filed with the patent office on 2013-02-08 and published on 2014-02-20 for music track exploration and playlist creation. The applicants listed for this patent are Rahul Kashinathrao Dahule and Shubhangi Mahadeo Jadhav. The invention is credited to Rahul Kashinathrao Dahule and Shubhangi Mahadeo Jadhav.

Application Number: 13/762834
Publication Number: 20140052731
Family ID: 43299641
Filed Date: 2013-02-08
Publication Date: 2014-02-20

United States Patent Application 20140052731
Kind Code: A1
Dahule; Rahul Kashinathrao; et al.
February 20, 2014
MUSIC TRACK EXPLORATION AND PLAYLIST CREATION
Abstract
The invention enables music tracks in a playlist to be explored
using a unique music genogram data format. The invention further
enables a user to map an emotional trajectory on the emotion wheel
as a basis for the generation of the playlist with music tracks.
The invention provides for a unique visualization of the playlist
using the emotional wheel representation. A user will be given the
option to specify an initial mood and a destination mood. The
trajectory between the initial mood and the destination mood may be
steered through the emotions falling in-between. The thus obtained
mood trajectory is then populated by music tracks to form a
playlist.
Inventors: Dahule; Rahul Kashinathrao (Nijmegen, NL); Jadhav; Shubhangi Mahadeo (Nijmegen, NL)

Applicants: Dahule; Rahul Kashinathrao (Nijmegen, NL); Jadhav; Shubhangi Mahadeo (Nijmegen, NL)

Family ID: 43299641
Appl. No.: 13/762834
Filed: February 8, 2013
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
PCT/EP2010/061550 | Aug 9, 2010 |
13762834 | |
Current U.S. Class: 707/740
Current CPC Class: G11B 27/34 (20130101); G06F 3/04847 (20130101); G11B 27/034 (20130101); G06F 3/04842 (20130101); H04N 21/26258 (20130101); H04N 21/8113 (20130101); G11B 27/105 (20130101); H04N 21/4755 (20130101); G06F 16/639 (20190101); G06F 16/68 (20190101)
Class at Publication: 707/740
International Class: G06F 17/30 (20060101); G06F 017/30
Claims
1. A computer-implemented method for exploring music tracks, the
method comprising: displaying a graphical representation of a first
music genogram of a first music track, the first music genogram
having a music genogram data structure for defining the music
genogram; receiving a first exploration input indicating a
selection of one of the tags in one of the sections; and if the
first exploration input indicates a pro-tag, displaying a first
link to a second music genogram of a second music track, and/or if
the pro-tag comprises two or more micro-pro tags, displaying a
graphical representation of the decomposition of the pro-tag and a
second link to a third music genogram of a third music track for
each of the micro-pro tags, wherein the music genogram data
structure comprises: sub-segment data identifying one or more
sub-segments to define a decomposition in time of a music track,
each sub-segment having a start time and an end time; and band data
identifying one or more bands to define a decomposition for the
time length of the music track, wherein a cross section of a
sub-segment and a band forms a section, the music genogram data
structure further comprising: tag data identifying one or more tags
in one or more sections, wherein a tag is one of: a deceptive tag
to indicate a surprising effect; a pro-tag to identify a starting
point for an exploration, using a graphical user interface, of
another music genogram based on similarities; a sudden change tag
to indicate a substantial change in scale, pitch or tempo; a hook
tag to identify a unique sound feature in the music track; or a
micro-pro tag to enable a decomposition of the pro-tag.
2. The method according to claim 1, wherein each section is
subdivided in two or more subsections and wherein the tag data
identifies the one or more tags in a subsection.
3. The method according to claim 1, wherein each sub-segment is one
of an intro part, a main vocals part, an instrumental part, a
stanza vocals part or a coda part, and wherein each band is one of
a beat band, a tune band or a special tune band.
4. The method according to claim 1, wherein the first music track
is one of two or more media items in a playlist, wherein each media
item comprises meta-data indicating one or more characteristics of
the media item, the method further comprising: displaying a
graphical representation of an emotional wheel, the emotional wheel
being a two dimensional Cartesian coordinate system based model for
classification of emotions wherein emotions are located at
predefined coordinates; receiving a first input indicating a
starting point of a mood trajectory in the emotional wheel, the
starting point corresponding to an initial mood at one of the
coordinates; receiving a second input indicating an end point of
the mood trajectory in the emotional wheel, the end point
corresponding to a destination mood at one of the coordinates;
defining the mood trajectory by connecting the starting point to
the end point via one or more intermediate points, the intermediate
points corresponding to one or more intermediate moods; displaying
a graphical representation of the mood trajectory in the graphical
representation of the emotional wheel; selecting the media items by
searching in the meta-data for emotion characteristics or mood
characteristics that match the initial mood, the intermediate moods
and the destination mood, respectively; and creating the playlist
of media items in an order from initial mood to destination
mood.
5. The method according to claim 4, wherein the starting point, the
end point and the intermediate points are predefined to form a
predefined mood trajectory, the method further comprising receiving
a third input to select the predefined mood trajectory.
6. The method according to claim 4, further comprising: receiving a
fourth input indicating a change in coordinates of one or more of
the intermediate points; and redefining the mood trajectory by
connecting the starting point to the end point via the changed
intermediate points.
7. The method according to claim 4, further comprising: calculating
a first series of intersecting circular sections along the mood
trajectory, wherein each circular section in the first series has a
center point on the mood trajectory; and calculating a second
series of circular sections, wherein each circular section in the
second series has a center point at an intersection point of two
circular sections in the first series, wherein the first series of
circular sections and the second series of circular sections
together form an area around the mood trajectory, or further
comprising: calculating an area around the trajectory using a
probabilistic distribution function or distance function to form a
regular or irregular shaped area around the mood trajectory, and
wherein the media items are selected by searching in the meta-data for
emotion characteristics or mood characteristics that match the
initial mood, the destination mood, and moods with coordinates
within the area around the trajectory, respectively.
8. The method according to claim 4, further comprising receiving a
fifth input indicating one or more media characteristics and/or
receiving a sixth input indicating a maximum number of media items
in the playlist and/or a maximum time-length of the playlist, and
wherein the selecting of the media items is restricted to the one
or more media characteristics and/or the maximum number of
media items and/or the maximum time-length.
9. The method according to claim 4, further comprising receiving a
seventh input indicating a weight factor for one or more of the
starting point, the intermediate points and the end point, and
wherein in the playlist the number of media items selected for the
starting point, the intermediate point or the end point for which
the weight factor is received is dependent on the value of the
weight factor.
10. The method according to claim 9, further comprising displaying
in the graphical representation of the mood trajectory one or more
resized points having a size indicating a probability that media
items are selected at the coordinates of the resized points.
11. The method according to claim 4, further comprising storing
shock data comprising an indication of a media shock applied by a
user to a particular media item in the playlist and an indication
of a relative position of the particular media item in the playlist
or mood trajectory, and wherein the selecting of the media items
uses the stored shock data to influence the probability that the
particular media item is selected.
12. A computer program product comprising software code portions
configured for, when run on a computer, executing the method steps
according to claim 1.
Description
FIELD OF THE INVENTION
[0001] The present invention relates to the exploration of music
tracks in a playlist and the creation of playlists. More
specifically, the invention relates to a computer-implemented
method for the creation of a playlist, a music genogram data
structure and a computer implemented method to explore music tracks
using the music genogram data structure.
BACKGROUND
[0002] Driven by the rapid expansion of the Internet, media items,
such as music, video and pictures, are increasingly becoming
available in digital and exchangeable formats. It is to be expected
that at some point in time substantially all music will be
available online, e.g. through music portal websites. A music track
is a single song or instrumental piece of music. With potentially
billions of music tracks from new and existing artists being added
to the worldwide online available music collection on a monthly
time scale, it is becoming very difficult to find favorite music
tracks, or new music tracks to one's liking, in the vast collection
of music. Similarly, the number of videos and pictures available
through online video and picture collections is growing.
[0003] To enable searching for and/or selection of a particular music
track, artist, music album, etcetera, digitized music is typically
provided with textual information in the form of meta-data. The
meta-data typically includes media characteristics like artist,
title, producer, genre, style, composer and year of release. The
meta-data may include classification information from music
experts, from friends or from an online community to enable music
recommendations.
[0004] A playlist is a collection of media items grouped together
under a particular logic. Known online music portals, such as e.g.
Spotify (www.spotify.com) and Pandora (www.pandora.com) offer tools
to make, share, and listen to playlists in the form of sequences of
music tracks. Individual tracks in the playlist are selectable from
an online library of music. The playlist can be created to
e.g. reflect a particular mood, accompany a particular activity
(e.g. work, romance, sports), serve as background music, or to
explore novel songs for music discoveries.
[0005] Playlists may be generated either automatically or manually.
Automatically created playlists typically contain media items from
similar artists, genres and styles. Manual selection of e.g. a
particular music track is typically driven by listening to a
particular track or artist, a recommendation of a track or artist,
or a preset playlist. It is possible that a user provides a
manually selected track or particular meta-data as a query and that
a playlist is generated automatically as indicated above in
response to this query.
[0006] Known tools for finding media items and creating playlists
do not take into account the different tastes of individual users,
which is further compounded by differences in their demographics.
Moreover, a user's response to e.g. music typically depends on the
type of user. Four types of users can generally be identified:
users indifferent to music; casual users; enthusiastic users; and
music savants. Indifferent users typically would not lose much
sleep if music ceased to exist.
Statistically 40% of users in the age group of 16-45 are of this
type. Casual users typically find that music plays a welcome role
but other things are far more important. Their focus is on music
listening and playlists should be offered in a transparent manner.
Statistically 32% of users in the age group of 16-45 are of this
type. Enthusiastic users typically find that music is a key part of
life but it is balanced by other interests. Their focus is on music
discovery and playlists may be created using more complex
recommendations. Statistically 21% of users in the age group of
16-45 are of this type. Savant users typically feel that everything
in life is tied up with music. Their focus is on the question
"what's hot in music?". Statistically 7% of users in the age group
of 16-45 are of this type. Known tools typically target a specific
type of user and do not take into account different types of
users.
[0007] It is known that emotions can be used to generate a
playlist. Users or experts, such as music experts, may e.g. add
mood classification data to the meta-data to enable generation of a
playlist with tracks in a particular mood. WO2010/027509 discloses
an algorithm that produces a playlist based on similarity metrics
that include relative information from five time-varying emotional
classes per track.
[0008] A method for personalizing content based on mood is
disclosed in US2006/0143647. An initial mood and a destination mood
are identified for a user. A mood-based playlisting system
identifies the mood destination for the user playlist. The mood
destination may relate to a planned advertisement. The mood-based
playlisting system has a mood sensor such as a camera to provide
mood information to a mood model. The camera captures an image of
the user. The image is analyzed to determine a current mood for the
user so that content may be selected to transition the user from
the initial mood to the destination mood responsive to the
determined mood of the user. The user has no direct control over
the desired mood destination and the transition path. Moreover, use
of a camera to capture the current mood may result in a less
preferred or non-preferred playlist, as there can be a mismatch
between the user's desire to reach a certain mood and the mood
reflected in his/her facial expressions.
[0009] There is a need for a user friendly method for exploring
music and creating and manipulating mood based playlists for
different types of users and from a vast and growing amount of
available online media items.
SUMMARY OF THE INVENTION
[0010] According to an aspect of the invention a
computer-implemented method is proposed for exploring music tracks.
The method comprises displaying a graphical representation of a
first music genogram of a first music track. The first music
genogram has a music genogram data structure. The method further comprises
receiving a first exploration input indicating a selection of one
of the tags in one of the sections. The method further comprises,
if the first exploration input indicates a pro-tag, displaying a
link to a second music genogram of a second music track, and/or, if
the pro-tag comprises two or more micro-pro tags, displaying a
graphical representation of the decomposition of the pro-tag and a
link to a third music genogram of a third music track for each of
the micro-pro tags. The music genogram data structure comprises
sub-segment data identifying one or more sub-segments to define a
decomposition in time of a music track. Each sub-segment has a
start time and an end time. The music genogram data structure
further comprises band data identifying one or more bands to define
a decomposition for the time length of the music track. A cross
section of a sub-segment and a band forms a section. The music
genogram data structure further comprises tag data identifying one
or more tags in one or more sections. A tag can be a deceptive tag
to indicate a surprising effect. The tag can be a pro-tag to
identify a starting point for an exploration of another music
genogram based on similarities. The tag can be a sudden change tag
to indicate a substantial change in scale, pitch or tempo. The tag
can be a hook tag to identify a unique sound feature in the music
track. The tag can be a micro-pro tag to enable a decomposition of
the pro-tag.
[0011] Thus, music tracks can be explored in a user friendly
way.
[0012] The embodiment of claim 2 advantageously enables more
precise tagging.
[0013] The embodiment of claim 3 advantageously enables the music
genogram to be particularly suitable for Indian music.
[0014] The embodiment of claim 4 advantageously enables music
tracks from a mood based playlist to be explored in a user friendly
way. The mood based playlist can be created in a user friendly
way.
[0015] Examples of a media item are a music track, a video or a
picture. The playlist may comprise a mixture of music tracks,
videos and/or pictures.
[0016] The embodiment of claim 5 advantageously enables a quick and
easy creation of a mood trajectory.
[0017] The embodiment of claim 6 advantageously enables a user to
manipulate the mood trajectory.
[0018] The embodiment of claim 7 advantageously enables more media
items to fulfill the selection criteria and thus become selectable
when creating the playlist.
[0019] The embodiment of claim 8 advantageously enables a user to
place restrictions on the playlist and/or the media items in the
playlist. Examples of a media characteristic are artist, title,
producer, genre, style, composer and year of release.
[0020] The embodiment of claim 9 advantageously enables a user to
increase the number of media items for a specific mood in the
playlist.
[0021] The embodiment of claim 10 advantageously enables a user to
see where in the mood trajectory a decrease or increase in the
number of media items with specific media characteristics can be
expected.
[0022] The embodiment of claim 11 advantageously enables future
creation of a playlist to take into account past user actions
related to a particular media item.
[0023] According to an aspect of the invention a computer program
product is proposed. The computer program product comprises
software code portions configured for, when run on a computer,
executing one or more of the above mentioned method steps.
[0024] Hereinafter, embodiments of the invention will be described
in further detail. It should be appreciated, however, that these
embodiments are not to be construed as limiting the scope of
protection for the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
[0025] Aspects of the invention will be explained in greater detail
by reference to exemplary embodiments shown in the drawings, in
which:
[0026] FIG. 1 shows a graphical representation of a prior art mood
wheel;
[0027] FIG. 2 shows a graphical representation of mood trajectories
of an exemplary embodiment of the invention;
[0028] FIG. 3 shows a graphical representation of a mood trajectory
of an exemplary embodiment of the invention;
[0029] FIG. 4 shows a graphical representation of a mathematical
algorithm of an exemplary embodiment of the invention;
[0030] FIG. 5 shows a graphical representation of a mood trajectory
with weighted points of an exemplary embodiment of the
invention;
[0031] FIG. 6 shows a graphical representation of a mood trajectory
with shock points of an exemplary embodiment of the invention;
[0032] FIG. 7 shows a graphical representation of a music genogram
of an exemplary embodiment of the invention;
[0033] FIG. 8 shows a collection of music genogram tags of an
exemplary embodiment of the invention;
[0034] FIG. 9 shows a graphical representation of exploring a music
genogram of an exemplary embodiment of the invention as may be
displayed on a user interface;
[0035] FIG. 10 schematically shows a non-real time exploration
initiative of an exemplary embodiment of the invention;
[0036] FIG. 11 schematically shows a real time initiative per music
track in the emotional wheel of an exemplary embodiment of the
invention;
[0037] FIG. 12 shows a graphical representation of a mapping of
real time and non-real time initiatives on the emotional wheel of
an exemplary embodiment of the invention;
[0038] FIG. 13 schematically shows steps of a method of an
exemplary embodiment of the invention;
[0039] FIG. 14 schematically shows steps of a method of an
exemplary embodiment of the invention;
[0040] FIG. 15 schematically shows steps of a method of an
exemplary embodiment of the invention; and
[0041] FIG. 16 schematically shows steps of a method of an
exemplary embodiment of the invention.
DETAILED DESCRIPTION
[0042] In FIG. 1 a model for classification of emotions is shown,
known as an emotional wheel 10. The emotion wheel 10 is founded on
psychology results and views of scientists like Plutchik in 1980,
Russell in 1980, Thayer in 1989 and Russell and Feldman Barrett in
1999. The emotional wheel 10 captures a wide range of significant
variations in emotions in a two dimensional space. Emotions can be
located in the two dimensional Cartesian system along the various
intensities of emotions and degree of activation. The x-axis
defines the level of valence. The y-axis defines the level of
arousal. Each emotional state can be understood as a linear
combination of these two dimensions. The four quadrants of the
emotional wheel identify the primary emotions joy, anger, sadness
and neutral. Secondary emotions, providing a more detailed
classification, are indicated in italics and include the emotions
pleased, happy, interested, excited, alarmed, annoyed, nervous,
afraid, angry, furious, terrified, sad, depressed, bored, sleepy,
calm, relaxed, content and serene. It is possible to define other
and/or more primary and/or secondary emotions.
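As an illustration of this coordinate model, the emotional wheel can be represented as a small lookup table of (valence, arousal) pairs. The following Python sketch is illustrative only; the numeric coordinates are hypothetical placements and are not prescribed by the application.

```python
# A minimal sketch of the emotional wheel as a two dimensional Cartesian
# model: x = valence, y = arousal. The numeric placements are hypothetical.
EMOTION_COORDINATES = {
    "happy": (0.8, 0.5), "excited": (0.6, 0.9),      # joy quadrant
    "angry": (-0.7, 0.8), "afraid": (-0.6, 0.7),     # anger quadrant
    "sad": (-0.8, -0.4), "depressed": (-0.7, -0.6),  # sadness quadrant
    "calm": (0.5, -0.7), "relaxed": (0.7, -0.6),     # neutral quadrant
}

def emotion_at(name: str) -> tuple[float, float]:
    """Return the (valence, arousal) coordinates of a named emotion."""
    return EMOTION_COORDINATES[name]
```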
[0043] The invention enables a user to map an emotional trajectory
on the emotion wheel as a basis for the generation of a playlist.
Moreover, the invention provides for a unique visualization of the
playlist using the emotional wheel representation. A user will be
given the option to specify an initial mood and a destination mood.
The trajectory between the initial mood and the destination mood
may be steered through the emotions falling in-between. The thus
obtained mood trajectory is then populated by music tracks to form
a playlist.
[0044] FIG. 2 shows an emotional wheel as shown in FIG. 1. For
readability purposes the labels indicating the axis, primary
emotions and secondary emotions as shown in FIG. 1 are not shown in
FIG. 2. A user may select the starting point 1 as starting point
for a playlist. The starting point typically corresponds to a
current emotion of the user. Furthermore the user may select the
end point 2 as the end point for the playlist. Music tracks are
then populated along a trajectory in-between the starting point 1
and the end point 2, e.g. trajectory 11, 12 or 13; the populated
tracks fall exactly on the trajectory and/or at near-exact
locations within an incremental distance of it.
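A mood trajectory between the two selected points can be sketched as a simple interpolation in (valence, arousal) space. The helper below assumes the hypothetical coordinate model sketched above; a straight line is only the simplest case, and trajectories 11, 12 and 13 would correspond to more complex point sequences.

```python
def straight_trajectory(start, end, n_points=10):
    """Linearly interpolate a mood trajectory from a starting point to an
    end point in (valence, arousal) space; intermediate points correspond
    to the intermediate moods along the path."""
    (x0, y0), (x1, y1) = start, end
    step = 1.0 / (n_points - 1)
    return [(x0 + (x1 - x0) * i * step, y0 + (y1 - y0) * i * step)
            for i in range(n_points)]

# e.g. from "sad" to "happy" with the hypothetical coordinates above:
# traj = straight_trajectory(emotion_at("sad"), emotion_at("happy"))
```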
[0045] A graphical user interface showing the emotional wheel may
be presented, through which the user first affixes a starting point
by using a mouse pointer to click on the spot on the emotional
wheel that resembles a perceived prevailing mood or emotion. The
end-point resembling a desired future mood or emotion is then also
affixed in a similar fashion. Next a mood trajectory is drawn
between the two points, either as a simple straight line or in a
more complex form such as trajectory 11, 12 or 13. The trajectory
may be altered using the mouse pointer to form a desired path
through desired moods or emotions. If e.g. the automatically
fetched trajectory resembles trajectory 11, this trajectory may be
changed by e.g. moving the mouse pointer from a top left location
to a bottom right location on the graphical user interface.
[0046] It will be understood that the user interface can be
programmed to accept other mouse gestures to achieve a similar
effect. Alternatively the user interface may use a touch screen
interface to control the pointer, or any other man-machine
interaction means.
[0047] As shown in FIG. 3, it is possible that a music trajectory
14 between a starting point 1 and an end point 2 self-intersects at
one or more points 3, thus resulting in one or more recurring
emotions along the trajectory.
[0048] Instead of selecting a starting point and an end point it is
possible to have the user interface display one or more predefined
mood trajectories obtainable from a backend database. Examples of
predefined mood trajectories are "from sad to happy" and "from
excited to calm". The selected predefined mood trajectory will be
displayed on the emotional wheel and may be altered as described
above.
[0049] Once the mood trajectory is selected a backend music engine,
which is implemented as a software module or a set of software
modules, uses a mathematical algorithm to populate the music tracks
along the trajectory. The backend music engine may reside at a
server, such as the online music portal server, or be partially or
entirely running on the client device where the user interface is
active.
[0050] An example of a mathematical algorithm is the virtual
creation of circular sections along the trajectory as visualized in
FIG. 4. For readability purposes the emotional wheel is not shown.
The circular sections are used to smoothen the trajectory 15 and
define an area wherein the to be selected music tracks are to be
found. A first series of circular sections 21 are calculated having
their center points lying along the trajectory 15. A second series
of circular sections 22 are calculated having their center points at
the points of intersection of the first series circular sections.
Music tracks residing within the area of the first and second
series of circular sections cover the music tracks being selectable
along the mood trajectory 15 for insertion in the playlist.
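A minimal sketch of this construction is given below, assuming the trajectory is a list of (valence, arousal) points as above. The radius is a hypothetical parameter; a music track is then selectable when its mood coordinates fall inside any circle of either series.

```python
import math

def circle_series(trajectory, radius=0.15):
    """First series: circular sections centered on the trajectory points.
    Second series: circular sections centered at the intersection points
    of adjacent, overlapping first-series circles."""
    first = [(x, y) for (x, y) in trajectory]
    second = []
    for (x0, y0), (x1, y1) in zip(first, first[1:]):
        d = math.hypot(x1 - x0, y1 - y0)
        if 0 < d < 2 * radius:  # adjacent circles intersect in two points
            h = math.sqrt(radius ** 2 - (d / 2) ** 2)
            mx, my = (x0 + x1) / 2, (y0 + y1) / 2
            ux, uy = (x1 - x0) / d, (y1 - y0) / d
            second += [(mx - h * uy, my + h * ux), (mx + h * uy, my - h * ux)]
    return first + second

def selectable(track_coord, centers, radius=0.15):
    """A track is selectable when its mood coordinates lie in any circle."""
    tx, ty = track_coord
    return any(math.hypot(tx - cx, ty - cy) <= radius for cx, cy in centers)
```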
[0051] Instead of calculating circular sections as shown in FIG. 4,
the area around the trajectory 15 may be calculated by a
probabilistic distribution function virtually forming a regular or
irregular shaped area following the trajectory 15.
[0052] Another example of a mathematical algorithm uses a distance
function to virtually create one or more additional trajectories
that follow the shape of the mood trajectory at predefined
distances from it. The mathematical
algorithm is then applied to the mood trajectory and the one or
more additional trajectories. The thus obtained areas along the
different trajectories together form a larger area. Music tracks
residing within the larger area cover the music tracks being
selectable along the mood trajectory for insertion in the
playlist.
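One possible reading of this distance-function variant is sketched below: each additional trajectory is formed by shifting every point of the mood trajectory along its local normal by a predefined distance, after which the circle construction above is applied to every trajectory. The normal estimation is an assumption made for illustration.

```python
import math

def offset_trajectory(trajectory, distance):
    """Virtually create an additional trajectory that follows the shape of
    the mood trajectory at a predefined (signed) distance, by shifting each
    point along the local normal of the path."""
    out = []
    for i, (x, y) in enumerate(trajectory):
        xa, ya = trajectory[max(i - 1, 0)]                    # previous point
        xb, yb = trajectory[min(i + 1, len(trajectory) - 1)]  # next point
        dx, dy = xb - xa, yb - ya                             # approx. tangent
        norm = math.hypot(dx, dy) or 1.0
        out.append((x - distance * dy / norm, y + distance * dx / norm))
    return out

# The larger area is the union of the circle series over all trajectories:
# centers = [c for d in (-0.1, 0.0, 0.1)
#            for c in circle_series(offset_trajectory(traj, d))]
```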
[0053] The playlist may be refined in various ways. Refinements may
have a real-time impact on the playlist.
[0054] Through the user interface a user may be given an option to
restrict the selectable music tracks using available meta-data. The
meta-data is e.g. used to select music tracks from a single genre
or a combination of two or more genres, an artist, overlapping of
two or more artists, a year of release or a time frame for release
years, or any other meta-data or combinations of meta-data.
[0055] The user interface may be used to display the total
time-length of the music tracks and/or the total number of music
tracks sequenced to be played in the playlist. An option may be
displayed to make the playlist shorter using input parameters such
as the total number of songs and/or the total time-length of the
playlist.
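A sketch of how such restrictions could be applied is shown below. The `duration` field and the trimming policy (cut from the end, preserving the mood order) are assumptions; the application does not specify how tracks are dropped.

```python
def shorten_playlist(tracks, max_items=None, max_seconds=None):
    """Trim a playlist to a maximum number of media items and/or a maximum
    total time-length, keeping the mood order from initial to destination.
    Each track is assumed to be a dict with a 'duration' field in seconds."""
    result, total = [], 0.0
    for track in tracks:
        if max_items is not None and len(result) >= max_items:
            break
        if max_seconds is not None and total + track["duration"] > max_seconds:
            break
        result.append(track)
        total += track["duration"]
    return result
```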
[0056] The user interface may display an option to partially play
the music tracks or a selection of the music tracks in the
playlist. An option is displayed to play e.g. 30%, 50% or 100%
of the total time-length of each music track. This way the total
playlist can be played with short-play. Additionally there may be
an automatic suggestive option to affect only selective tracks in
the playlist. Through this option e.g. the most liked music tracks
will be played for a longer duration while music tracks with a
predefined low rating will be short-played.
[0057] Through the user interface the user may be given an option
to rate, save, share, favorite, ban and/or comment on
self-constructed trajectories. These actions related to
self-constructed trajectories may be logged to determine a level of
interactiveness in exploring music before music tracks are actually
selected or played.
[0058] There may be a weighted distribution option along the
trajectory path on different emotions along with different
meta-data elements like genres and artists. With this option it is
possible to selectively amplify different types of weights falling
along the trajectory. An example of how this may be visualized in
the user interface is shown in FIG. 5. Along the mood trajectory 16
one or more points 5, 6 may be added using the mouse pointer. The
size of points 4, 5 and 6 at the particular moods or emotions
defines the probability that music tracks will be selected at the
respective parts of the trajectory. In FIG. 5 points 4 are the
smallest, point 5 is made bigger and points 6 are made biggest.
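The weighted distribution could be realized with a straightforward weighted draw, sketched below under the assumption that each trajectory point carries a weight proportional to its displayed size.

```python
import random

def pick_selection_points(weighted_points, k):
    """Draw k trajectory points at which music tracks are selected; a
    bigger point (higher weight) yields a proportionally higher selection
    probability. weighted_points: [((valence, arousal), weight), ...]."""
    points = [p for p, _ in weighted_points]
    weights = [w for _, w in weighted_points]
    return random.choices(points, weights=weights, k=k)

# e.g. a point of weight 3 is drawn three times as often as one of weight 1:
# pick_selection_points([((0.2, 0.1), 1), ((0.4, 0.3), 2), ((0.6, 0.5), 3)], 5)
```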
[0059] The mood trajectory between the starting point and the end
point may be automatically calculated taking into account user
specific mood characteristics. Per user a backend learning engine,
which is implemented as a software module or a set of software
modules and typically runs on a server such as the online music
portal server, may log music tracks selected or played for a
particular mood and for transitioning from one mood to another
mood. This enables the learning engine to make predictions in
desired mood transitions to get from the starting mood to the end
mood. The calculation of the mood trajectory may use various
criteria such as music discoveries and/or subjective and cognitive
factors.
[0060] The training algorithm of the backend learning engine may
log the relative positioning of a music track in a playlist and
capture allied unfavorable and favorable musical shocks, possibly
in real-time. In this context musical shocks are activities such as
rating, favoriting, skipping, banning and emotionally tagging music
tracks. An unfavorable shock is e.g. skipping or banning a music
track. A favorable shock is e.g. favoriting a music track. A
continual part of the mood trajectory contains no musical shocks
while a disturbed part of the mood trajectory contains one or more
shocks.
[0061] FIG. 6 shows an example of a mood trajectory 17 with shocks
10 at several locations on the trajectory. Tag points 9a-9d are
added to distinguish continual from disturbed trajectories. For
readability purposes only one shock has a reference number 10.
Between the starting point 7 and tag-point 9a no shocks are
recorded. The partial trajectory between points 7 and 9a is thus a
continual trajectory. Between tag-point 9a and tag-point 9b
thirteen shocks are recorded. The partial trajectory between points
9a and 9b is thus a disturbed trajectory. Between tag-point 9b and
tag-point 9c and between tag-point 9c and tag-point 9d no shocks
are recorded. The partial trajectories between points 9b and 9c and
between points 9c and 9d are thus continual trajectories. Between
tag-point 9d and end point 8 three shocks are recorded. The partial
trajectory between points 9d and 8 is thus a disturbed
trajectory.
[0062] When a partial mood trajectory is automatically generated
the backend learning engine may be queried to determine if one or
more shocks have previously been recorded or if the partial mood
trajectory is known to be a continual or disturbed trajectory. The
shock information may be used to avoid predictive leads from the
earlier reactions. For example, music tracks along a disturbed
trajectory will have a lesser probability of being populated when
the shock is unfavorable.
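A sketch of how the stored shock data could influence the selection probability; the record fields and the damping/boost factors are assumptions for illustration.

```python
def shock_adjusted_probability(base_prob, shock_log, track_id, damping=0.5):
    """Lower the probability that a particular media item is populated when
    unfavorable shocks (skip, ban) were recorded for it, and raise it
    slightly for favorable shocks (favorite). Factors are illustrative."""
    p = base_prob
    for shock in shock_log:
        if shock["track_id"] != track_id:
            continue
        if shock["kind"] in ("skip", "ban"):   # unfavorable shock
            p *= damping
        elif shock["kind"] == "favorite":      # favorable shock
            p = min(1.0, p * 1.2)
    return p
```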
[0063] The selection of music tracks along the mood trajectory may
use personalized meta-data or other forms of personalized music
characteristics. A structured personalization method will be
discussed, but it is to be understood that the invention can make
use of any form of personalized music characteristics that has been
created in any other manner.
[0064] The personalized music characteristics method provides a way
to tag music tracks with moods or emotions experienced or felt by a
user when listening to a particular music track. The following
basic format is used to define the mood or emotion:
[0065] "I recall ______ (Box1) when ______ (Box2)"
[0066] When listening to a particular song, a user could e.g.
choose "love" for Box1 and "I had my first date" for Box2, thus
creating the line "I recall love when I had my first date".
[0067] The Box1 information is related to primary and/or secondary
emotions, such as the emotions of the four quadrants and the
emotions within the four quadrants as shown in FIG. 1. To select
the Box1 emotion, the user interface displays the emotional wheel
allowing the user to point and click an emotion. Alternatively
emotions are shown in any other representation format from which a
user may select an emotion using the user interface.
[0068] Box2 information is used to further personalize the music
characteristics. Box2 typically describes a situation from the past
or future. Since there are substantially infinite possible
situations, Box2 preferably allows a free text input.
[0069] Each of the two boxes Box1 and Box2 may be linked to
pictures. Since the Box1 possibilities are predefined using the
emotional wheel logic, there can be a database on the server with
one or more pictures for each of the emotional tags. The Box1
pictures may be randomly recalled and displayed along with the text
if the user has personalized a music track. For Box2 an option may
be presented to upload a personal picture. Also this uploaded
picture can be displayed along with the text if the user has
personalized a music track.
[0070] Over time perceived emotions or moods may change for a
particular music track. Hereto the user can be given the
possibility to alter the Box1 and/or Box2 information for a
particular music track. Preferably the server stores not only the
current personalized music characteristics of a music track but
also past characteristics that have been modified.
[0071] A third information element Box3 may be added to the
personalized music characteristics to enable addition of a time
factor. The time factor limits the perceived emotion or mood to the
selected time. The Box3 information enables e.g. selection of one
of the following elements: early morning, morning, mid-morning,
afternoon, evening, night/bed time, summer time, winter time,
drizzling rains, pouring rains and spring time. Any other moments
in time may be defined for the purpose of Box3.
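The three-box format could be stored as a simple record per user and music track. The sketch below is an assumed representation; it also keeps the modification history, since the server preferably stores past characteristics as well (see paragraph [0070]).

```python
from dataclasses import dataclass, field

@dataclass
class Personalization:
    """'I recall <Box1> when <Box2>', optionally limited in time by <Box3>."""
    box1: str                 # emotion picked from the emotional wheel
    box2: str                 # free-text situation, e.g. "I had my first date"
    box3: str = ""            # optional time factor, e.g. "early morning"
    history: list = field(default_factory=list)  # past (box1, box2, box3)

    def update(self, box1, box2, box3=""):
        # keep the previous characteristics before overwriting them
        self.history.append((self.box1, self.box2, self.box3))
        self.box1, self.box2, self.box3 = box1, box2, box3

    def sentence(self):
        text = f"I recall {self.box1} when {self.box2}"
        return text + (f" ({self.box3})" if self.box3 else "")
```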
[0072] Music characteristics created through the structured
personalization method may be used by the backend learning engine
to learn a cognitive pattern of the user. More specifically,
causation and effectuation can be systematically predicted along a
chain reaction from a particular stimulus. Furthermore, the backend
trajectory engine may use this information to calculate a chain of
emotions or moods from a current emotion to aid the automatic
creation of the mood trajectory from the starting point to the end
point through intermediary calculated points.
[0073] When listening to a particular music track in the playlist a
side step may be made to explore other music that is somehow (i.e.
through music genograms, as will be explained) related to the
currently playing music. Hereto the user interface may display a
button to open a music genogram of the currently playing music
track, from which other music tracks may be explored. The other
music tracks that are explored using the music genograms do not
necessarily match the current mood along the mood trajectory. At
any point in time the user may leave the music exploration path and
return to the playlist as defined by the mood trajectory.
[0074] The music genogram is a `genetic structure` of a music
track. In FIG. 7 an example of a typical music genogram 30 for an
Indian music track is shown. The gene structure may be different
for music tracks originating from different geographies or for
music in different genres.
[0075] The music genogram 30 divides the music track into seven
sub-segments in time: an intro part 34, followed by a main vocals
part 35 (e.g. mukhada in an Indian parlance), followed by an
instrumental part 36, followed by a stanza vocal part 37 (e.g.
antara in an Indian parlance), followed by another instrumental
part 36, followed by another stanza vocal 37 (e.g. antara in an
Indian parlance), followed by a coda part 38. The length of each
sub-segment may vary.
[0076] The music genogram 30 further divides the music track into
three different horizontal bands: a beat band 31, a tune band 32
and a special tune band 33. The beat band 31 addresses musical
features related to rhythm and percussion. Bass and acoustic guitar
strokes which form a rhythmic pattern are also included in the beat
band. The tune band 32 includes tune attributes in vocals and
accompanied instrumentals. The special tune band 33 relates to
special tones such as chorus, yodel, whistle, use of different
language, whispering, breathing sounds, screaming, etcetera. Each
of the three bands 31, 32, 33 is divided equally in the seven
sub-segments described above.
[0077] Thus, in the example of the music genogram 30 there are 21
unique sections. Other music genograms may have different
sub-segments and/or bands resulting in a different number of unique
sections. Each section may be further subdivided into subsections,
e.g. into three subsections to identify a beginning subsection, a
middle subsection and an end subsection of that section.
[0078] One or more unique sections of the music genogram can be
tagged using one or more of the following types of tags. The
individual tags are shown in FIG. 8 for identification purposes.
Pro-tags 42 are used to explore similarities. Micro-pro tags 45 are
used as a decomposed part of the pro-tag 42 to explore
similarities. Hook factor tags 44 are used to trigger novelty.
Deceptive tags 41 are used to trigger serendipity. Sudden change
tags 43 are used to indicate a sudden change in beat (e.g. a 30% or
more change in tempo), in scale (e.g. going from a C scale to a C#
scale or going from whispering to screaming) and to trigger
similarity. Sudden change tags 43 capture substantial change in the
scale/pitch or the tempo of the music track. The pro-tags 42,
micro-pro tags 45 and sudden change tags 43 are used to find
similarities between the current music track and another music
track. The hook tag 44 indicates a unique feature in the music
track that catches the ear of the listener. The deceptive tag 41 is
typically allocated to intro parts. After listening to an intro
part the instrumentals and/or vocals used in the intro part may
tempt a user (in anticipation) to expect or explore another music
track with a familiar tune. This may result in the user ending up
listening to a completely different music track. Each type of tag
can be visualized by a unique identification icon as shown in FIG.
8 to enable displaying of the music genogram in the user interface.
Instead of the icons shown in FIG. 8 any other icon may be used to
visualize the tags.
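Taken together with claim 1, the music genogram data structure can be sketched as follows; the type and field names are assumptions, but the decomposition into sub-segments, bands, sections and the five tag types follows the text and FIGS. 7 and 8.

```python
from dataclasses import dataclass, field
from enum import Enum

class TagType(Enum):
    DECEPTIVE = "deceptive"          # triggers serendipity
    PRO = "pro"                      # starting point to explore similarities
    MICRO_PRO = "micro_pro"          # decomposed part of a pro-tag
    SUDDEN_CHANGE = "sudden_change"  # substantial change in scale/pitch/tempo
    HOOK = "hook"                    # unique sound feature (hook factor)

@dataclass
class SubSegment:
    name: str      # e.g. "intro", "main vocals", "instrumental", "coda"
    start: float   # start time in seconds
    end: float     # end time in seconds

@dataclass
class Tag:
    tag_type: TagType
    sub_segment: int           # index of the sub-segment (column)
    band: int                  # index of the band (row)
    subsection: str = "whole"  # e.g. "beginning", "middle" or "end"
    micro_tags: list = field(default_factory=list)  # decomposition of a pro-tag

@dataclass
class MusicGenogram:
    track_id: str
    sub_segments: list  # decomposition in time of the music track
    bands: list         # e.g. ["beat", "tune", "special tune"]
    tags: list          # tags located in sections (sub-segment x band)
```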
[0079] In the music genogram 30 of FIG. 7 tags are added to various
sections of the music track. The music genogram 30 including tags
may be displayed in the user interface as shown in FIG. 7. A
deceptive tag 41 is located in the intro part 34 of the beat band
31. Seeing this deceptive tag 41 may lead the user to think that
there is an intelligence connected to this section.
Based on the characteristics of the deceptive tag the user will
typically expect a different music track than normal. A pro-tag 42
is located in the intro part 34 of the tune band 32, which is
further decomposed into two micro-pro tags 45 each of which is
connected to other similar elements. The pro-tag 42 may e.g. be a
combination of a trumpet and violin. This combination may have been
used subtly or obviously in other music tracks. It is likely that a
trumpet playing style or a violin playing style is used in other
music tracks. In the example music genogram 30 a trumpet and a
violin form two micro-pro-tags 45. Icon 46 indicates that the tag
intelligence, the pro-tag 42 in this case, can be expected in the
end sub-section. To indicate the beginning sub-section or the
middle sub-section icons 48 and 47 may be used, respectively.
Another pro-tag 42 is located in the second instrumental part 36 of
the beat band 31. This pro-tag 42 is not affiliated to any
micro-tag. A hook 44 of the music track is located in the main
vocals part 35 of the special tune band 33. It indicates that the
vocals of the song are characterized by special tunes such as
whistling or gargling or a combination thereof. A sudden change tag
43 is located in the second instrumental part 36 of the special
tune band 33. This tempts one to expect that there could be a
change in scale e.g. with the chorus effect during that
section.
[0080] Except for hook tags, each tag may have a connection to one
or more other music tracks to enable exploration of other music
tracks. FIG. 9 shows an example of how this may be displayed in the
user interface. The music genogram 30 of FIG. 7 is shown in FIG. 9,
together with connections to music genograms of six other music
tracks via clickable buttons 51, 52, 53, 54, 55 and 56,
respectively. In the example of FIG. 9 the pro-tag 42, on a
combination of violin and trumpet, in the intro part of the tune
band 32 is connected to three different music tracks, whose music
genograms can be selected by clicking one of the top-most buttons
51, 52 and 53, respectively. The pro-tag 42 can be further
decomposed into two micro-tags each of which is further connected
to three other music tracks. The micro-tag 45 related to the violin
is connected to a fourth music track, whose music genogram can
be selected by clicking button 54 next to the violin indicator 61.
The micro-tag 45 related to the trumpet is connected to a fifth and
sixth music track, whose music genograms can be selected by
clicking buttons 55 and 56, respectively, next to the trumpet
indicator 62.
[0081] When exploring other music tracks through the music genogram
connections the selected music track may be played or added to the
playlist, either automatically or manually. The backend learning
engine may be configured to constantly monitor users for the kind
of music genogram tags that are explored and recommend to the user
what more to explore.
[0082] Thus, in a music genogram the positioning and the
connections of the tags in a master music track (i.e. the music
track that is currently being played or explored) are shown. A
vector format may be used to store the connections in a database.
The following vector format is preferred, but any other format may
be used:
TABLE-US-00001 { node-1, node-2, connection(s) of node-1 and
node-2, weightage(s) over respective connections }
[0083] Herein node-1 identifies the master music track and node-2
identifies a slave music track to which a connection is made. The
connections information identifies the locations of the connecting
tags. Tag coordinates for node-1 include an identity of the master
music track, an indication of the section in the music genogram of
the master music track and a tag-type identifier. Tag coordinates
for node-2 include an identity of the slave music track, an
indication of the section in the music genogram of the slave music
track and an indication whether a connection is made to a pro-tag
only or also to a micro-tag of the pro-tag. Similarity tag(s) of
the music genogram may be weighed over the connection(s) with
weights e.g. ranging from 1 to 5, where 1 indicates an exact or
very obvious match and 5 indicates a very subtle match.
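The preferred vector format could be represented as a small record; the field names below are assumptions around {node-1, node-2, connection(s), weightage(s)}, with the stated weight scale of 1 (exact or very obvious match) to 5 (very subtle match).

```python
from dataclasses import dataclass

@dataclass
class TagCoordinate:
    track_id: str   # identity of the music track
    section: str    # section in the genogram, e.g. "BB4" (beat band, part 4)
    tag_ref: str    # node-1: the tag type; node-2: "pro" or "micro_pro"

@dataclass
class GenogramConnection:
    """{ node-1, node-2, connection(s), weightage(s) } vector format."""
    node1: TagCoordinate  # master music track side of the connection
    node2: TagCoordinate  # slave music track side of the connection
    weight: int           # 1 = exact/very obvious ... 5 = very subtle

    def __post_init__(self):
        if not 1 <= self.weight <= 5:
            raise ValueError("weightage ranges from 1 to 5")
```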
[0084] The positioning and the affiliated tag connections are
typically added manually by a music expert or may be automated with
algorithms, and stored in a meta-dataset. The meta-dataset is
typically stored in a database and may be formatted as a positional
matrix indicating the functional connections between music tracks.
The matrix reflects the connections between similar music
tracks.
[0085] Users may be given the possibility to modify a music genogram
or add connections, but preferably such modifications are moderated
by a music expert before the results are stored in the
meta-dataset. The meta-dataset forms a relationship matrix for the
music tracks. Assigning music attributes to the building blocks of
a music track using the music genogram structure makes it possible
to exploit similarities and novelties in music tracks by other
musicians or artists. A single music attribute can be used subtly
or obviously anywhere in a music track and in combination with
other music attributes. For example, a particular tune could be
used in two songs with a different, distinguishable construction of
building blocks with respect to their music genogram structure.
This helps users to discover how the same tune can be used to
create a different effect when used in different constructions of
music genograms.
[0086] To illustrate the use of music genograms three different
options to explore music tracks will be described. Consider a
pre-populated playlist of 5 different music tracks. These are
master music tracks indicated by M1, M2, M3, M4 and M5. Suppose that
the music tracks will be played in the order M1, M2, M3, M4, M5.
M2 is being currently played and the user is offered the music
genogram of M2 in the user interface. The music genogram is tagged
with three distinct tags which are connected to three distinct
slave music tracks S21, S22, and S23. In the indication of the
slave music tracks the first digit indicates the master music track
and the second digit indicates the slave music track. By moving the
mouse pointer over a genogram tag of M2, only one slave genogram
(i.e. S21, S22 or S23) will be displayed depending on the selected
tag. The master genogram for M2 remains displayed.
[0087] In the first option music tracks are explored and discovered
in the context of the master music track. The objective of the
first option is to have the user intervene to play selected slave
music tracks with only overlapping connections with respect to the
master music track. Following the master genogram of M2, two slave
genograms of S22 and S23 are selected to explore. The system keeps
track of explored slave music tracks and at this point in time it
knows that S21 is still to be explored. Hence, the master genogram
of M2 is marked to be incompletely explored. When all the slave
connections have been selected for exploring (in this case the
three slave genograms S21, S22 and S23) then the genogram
exploration of M2 is marked to be completed. After selecting S22
and S23 the playlist is updated from {M2, M3, M4} to {M2, S22, S23,
M3, M4}. It is noted that M1 is not in the playlist because
it has been played already. M2 is still in the playlist as it is
currently being played. M2, S22, S23, M3 and M4 will be played one
after the other. Only connecting tags of the slave genogram(s) with
respect to the master genogram are displayed.
[0088] In the second option music tracks are explored and
discovered instantaneously in the context of a master music track.
The objective of the second option is to give the user instant
gratification. The second option is particularly suitable for
expert users intending to study exact setting of a music track. M2
is being played and S21 is recalled to discover the connection. S21
will start playing fulfilling the following criteria. The
positional section of the connecting tag in S21 is to be located,
for example BB4 (fourth part of the beat band). The parts before
and after the located tag are identified by adding 1 and -1 to the
part number. This gives BB3 and BB5. S21 will start playing for the
time range spanning the three parts BB3, BB4 and BB5. Only
connecting tags of the slave genogram(s) with respect to the master
genogram are displayed.
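The play-range rule of the second option can be sketched as follows; the part-to-time mapping is an assumed lookup, and clamping at the first and last part is an assumption for tags at the edges.

```python
def instant_play_range(tag_part, part_times, n_parts=7):
    """Given the part index of the connecting tag (e.g. 4 for BB4), return
    the time range spanning the part before, the tagged part and the part
    after it (BB3..BB5). part_times maps part index -> (start, end) seconds."""
    first = max(1, tag_part - 1)
    last = min(n_parts, tag_part + 1)
    return part_times[first][0], part_times[last][1]

# e.g. a tag in BB4 with hypothetical part timings:
# instant_play_range(4, {3: (60, 90), 4: (90, 120), 5: (120, 150)})
# -> (60, 150): S21 plays for the range spanning BB3, BB4 and BB5
```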
[0089] In the third option music tracks are explored and discovered
along the long tail of master to slave to slave's slave, etcetera,
until the user intervenes. The objective of the third option is to
give the user instant gratification. The third option is
particularly suitable for savant users. M2 is being played and S21
is recalled to discover the connection. S21 will start playing
using the criteria shown for the second option. The difference with
the second option is that all tags of S21 will be displayed. In
other words, the displayed genogram of S21 not only includes
overlapping tags with respect to the master music track M2, but
also includes tags overlapping with other music tracks.
Non-overlapping tags of S21 connected to the other music tracks
will be discovered in this option.
[0090] The music genogram recommendation system has many
advantages. Seeing and exploring descriptive tags of the music
genogram has a significant effect on stimulating a curiosity in the
most logical ways. This offers instant gratification. It offers a
novel, contextual, engaging and structured recommendation that
renders a truly transparent and steerable navigation through music
tracks. The music genogram representation creates a paradigm shift
in conventional recommendation systems. The music genogram captures
music essence at macro and micro elements of music tracks. The
active and transparent recommendation structure lets the user
anticipate the type and positioning of the recommended features. It
helps the user to discover the recommendations in a systematic way,
thereby avoiding randomized or hitting-in-the-dark discoveries of
music. The recommendation method enables systematic discovery
within a huge volume of undiscovered content. It is highly
interactive and transparent to the user, and the user can have lots
of control over choosing the recommended content. The logically
designed interaction with the tags of the music genogram can steer
the user towards the long tail of the relevant content. Apart
from items of similar features for music track discovery, the music
genogram includes novel and serendipitous recommendation elements.
This aspect inspires users to gain trust over the recommendation
method. Users can muse over the recommendation and grow their
learning potential in music. The size of descriptive tags may be
directly proportional to the strength of the overlap between the
elements. This is useful for predictive anticipation on the
functional connection(s) of the connecting tags. A decision tree on
the exploration can further be mapped and studied. Each of the
descriptive tags may be rated (like/dislike). Once the descriptive
tags in the music genogram are explored completely by the user and
connecting music tracks are listened to, tags then become grey (or
any other indication is given to the tag). This way music genogram
discovery can be evaluated as either incomplete or complete.
Furthermore, the following metrics are envisaged in the algorithm
that tell about the personalized liking of the user on the
recommended items. Metric 1: Number of explored items/number of
recommended items. This metric indicates the discovering initiative
on a quantitative basis. Metric 2: Number of explored items of the
similar type/number of recommended items of the similar type. This
metric indicates the discovering initiative of a user on a
qualitative basis and provides for similarity exploration for
subtle similarities and/or obvious similarities, novelty
exploration and serendipitous exploration.
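Both envisaged metrics are simple ratios; a sketch, assuming explored and recommended items are tracked as (item, type) pairs where the type labels the kind of exploration (subtle similarity, obvious similarity, novelty, serendipity).

```python
def metric_1(explored, recommended):
    """Metric 1: number of explored items / number of recommended items
    (discovering initiative on a quantitative basis)."""
    return len(explored) / len(recommended) if recommended else 0.0

def metric_2(explored, recommended, item_type):
    """Metric 2: explored items of a similar type / recommended items of
    that type (discovering initiative on a qualitative basis)."""
    rec = {item for item, t in recommended if t == item_type}
    exp = {item for item, t in explored if t == item_type}
    return len(exp & rec) / len(rec) if rec else 0.0
```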
[0091] It has been described how the music genogram may be used to
make a side step from the playlist generated from the mood
trajectory. It is to be understood that the master music track for
the music exploration does not necessarily come from the playlist.
It is possible to explore music using music genograms starting from
any music track.
[0092] The hybrid of the unique features of the described mood
trajectory, structured personalization and music genogram enables
users to grow to higher levels on the scale of indifferent to
casual to enthusiastic to savant.
[0093] An example of a hybrid usage is given in the following
sequence of steps.
[0094] First a mood trajectory is selected by either building an
own trajectory, following one of the recommended trajectories,
selecting trajectories from the expert library or following the
trajectories inputted from the shared community.
[0095] Next, options are explored and selected to refine the
trajectory. The options include pre-selecting and loading music
tracks per artist/genre on the emotional wheel, adding differential
weights along the selective emotions of the trajectory and
assigning a probabilistic distribution on personalized, favorite,
rated and/or banned songs and/or types of genograms (e.g. 2
stanzas, 3 stanzas, instrumental) and/or favorite music genograms
and/or incomplete music genograms for the music tracks to get
populated.
[0096] Next, the user listens to the music tracks along the emotion
trajectory on the emotion wheel.
[0097] The user is able to manually tag each of the music tracks
populated in the playlist algorithm of the trajectory by banning,
favoriting and/or rating a music track. The user is able to personalize
each of the music tracks populated in the playlist algorithm of the
trajectory. The user is able to see the music genogram of each of
the music tracks populated in the playlist algorithm of the
trajectory. Furthermore the user is able to discover the tags of
the music genogram by exploring type of the tag and the connecting
songs at the macro/micro tags.
[0098] If the user wants to immediately explore the visual
connection(s) as revealed from the music genogram of the master
music track, then the user is able to selectively/completely queue
the connecting song(s) in the playlist of the trajectory.
[0099] If the user wants to immediately explore the visual
connection(s) as revealed from a slave's music genogram, then the
user is able to selectively/completely queue the connecting song(s)
in the playlist of the trajectory. This logic can also be extended
to the slave's slave's genogram in an indefinitely repeating loop
as triggered by the interactive initiative of the user.
[0100] The user is able to favorite the music genogram. The user is
able to rate (like `thumbs up` or `thumbs down`) the connecting
node(s) of the music track generated in the playlist when they are
being played. The user is able to tag the music genogram for a
reminder. This feature is useful if the user has incompletely
explored the tags revealed in the music genogram and wants to
complete the discovery of the tags at a later time or event. The
user is able to share the music genogram or only
selective tag(s) in the music genogram within the online
community.
[0101] The user typically follows a decision tree when creating
mood trajectories (typically non-real time) and exploring
individual media items such as music tracks (typically real
time).
[0102] FIG. 10 shows an example of a decomposition of how non-real
time user initiatives are mapped to discovering/exploring music.
Block 100 indicates the start of the non-real time exploration
initiative. Block 101 indicates the start of different trajectory
structures. Block 102 indicates building an own trajectory. Block
103 indicates using recommended trajectories. Block 104 indicates
using an expert pre-mapped library with pre-stored trajectories.
Block 105 indicates using a community induced trajectory. Block 106
indicates pre-selecting and loading music tracks per artist or
genre on the emotional wheel. Block 107 indicates adding emotion
weights on the different locations of the trajectory. Block 108
indicates assigning probabilistic distribution on selecting
personalized and/or type of genogram and/or incomplete and/or
completed genogram and/or favorite and/or ban music tracks to get
populated as a playlist. Block 109 indicates `following which type
of recommendation?`. Block 110 indicates a link to a real time
initiative as shown in FIG. 11.
[0103] FIG. 11 shows an example of a decomposition of a real-time
initiative per music track on the emotional trajectory. Block 200
indicates the start of the real time initiative. Block 201
indicates explicit tagging. Block 202 indicates a favorite action.
Block 203 indicates a rating action. Block 204 indicates a ban
action. Block 205 indicates a skip action. Block 206 indicates a
personalization initiative. Blocks 207 indicate an addition. Block
208 indicates emotion coordinates on the trajectory, either exact
or near exact. Block 209 indicates primary/secondary emotion on the
first box. Block 210 indicates a comparison. Block 211 indicates
`match?`. Block 212 indicates the result `close`. Block 213
indicates the result `in-between`. Block 214 indicates the result
`far`. Block 215 indicates original trajectory completion. Block
216 indicates fast track. Block 217 indicates full track. Block 218
indicates continual track. Block 219 indicates disturbed track.
Block 220 indicates a favored shock. Block 221 indicates an
unfavored shock. Block 222 indicates favored shock as discovery
initiative. Block 223 indicates a genogram initiative. Block 224
indicates option 1. Block 225 indicates option 2. Block 226
indicates option 3. Block 227 indicates master-slave. Block 228
indicates master-slave-slave . . . n. Block 229 indicates
quantitative metrics. Block 230 indicates qualitative metrics.
Block 231 indicates completion. Block 232 indicates in-completion.
Block 233 indicates new genograms. Block 234 indicates existing
genograms. Block 235 indicates similarity. Block 236 indicates
novelty. Block 237 indicates serendipitous. Block 238 indicates on
connection. Block 239 indicates community sharing. Block 240
indicates subtle thumbs up or down. Block 241 indicates obvious
thumbs up or down.
[0104] Each of the non-real time and real time initiatives for
music discovery/exploration as shown in FIG. 10 and FIG. 11 may be
allocated in terms of coordinates on the emotion wheel to enable
statistical reports and recommendations. Traditional data
regression modeling techniques can be deployed per music track
populated in a quadrant of the emotion wheel. These techniques thus
map a music track as an input to the respective emotion coordinates
and respective extent of the discovery initiatives. Differential
weights are assigned on different music discovery initiatives
mapped in the dual effort of real time and non-real time. For
example, within non-real time initiatives, building one's own
trajectory receives more weight for music discovery potential than
following recommended trajectories. A similar logic of assigning
differential weights can also be extended to gauge the music
discovery potential in real time.
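The differential weighting could, for instance, be realized as in the
following sketch; the numeric weights and initiative names are assumptions,
since the text prescribes no concrete values.

```python
# Hypothetical differential weights per discovery initiative.
INITIATIVE_WEIGHTS = {
    "build_own_trajectory": 1.0,   # non-real time: weighted highest
    "follow_recommended": 0.4,     # non-real time: weighted lower
    "genogram_exploration": 0.9,   # real time
    "favorite": 0.3,
    "skip": 0.1,
}

def discovery_potential(initiative_counts: dict[str, int]) -> float:
    """Weighted sum of the initiatives observed for a music track populated
    in a quadrant of the emotion wheel."""
    return sum(INITIATIVE_WEIGHTS.get(name, 0.0) * count
               for name, count in initiative_counts.items())
```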
[0105] FIG. 12 shows an example of a mapping of real time and
non-real time initiatives on the emotional wheel. A first cluster
71 of music discovery initiatives represents personalization
initiatives of the user. A second cluster 72 of music discovery
initiatives represents music genogram discoveries of the user. A
third cluster 73 of music discovery initiatives represents the
continual trajectory.
[0106] For the first quadrant of the emotional wheel of FIG. 12 an
intuitive expression can be aggregated by meticulously studying the
varied music discovery initiatives. The expression for the
personalized music style of the user in the first quadrant is as
follows: personalized music style in the first quadrant = % coverage
of the emotion wheel by the personalization initiative (PI) + %
coverage of the emotion wheel by the music genogram discovery
initiative (GDI) + % coverage of the emotion wheel by the continued
trajectory (CT). For example, the personalized music style in the
first quadrant = 20% PI + 55% GDI + 50% CT. Note that the coverage of
the clusters may
overlap over the different sets of music initiatives and may
therefore not add to 100%. A radical change in the number of
attempt(s) in the same quadrant and on the same reference of the
loaded data may be notified to the user.
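A minimal sketch of this expression, using the worked example above, could
look as follows; the function and parameter names are assumptions.

```python
def personalized_music_style(pi_coverage: float, gdi_coverage: float,
                             ct_coverage: float) -> float:
    """First-quadrant expression: sum of the emotion-wheel coverage
    percentages of the personalization initiative (PI), the music genogram
    discovery initiative (GDI) and the continued trajectory (CT). Clusters
    may overlap, so the result is not capped at 100%."""
    return pi_coverage + gdi_coverage + ct_coverage

# The worked example from the text: 20% PI + 55% GDI + 50% CT.
style_q1 = personalized_music_style(20.0, 55.0, 50.0)  # 125.0
```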
[0107] The expression may be expanded or optimized to cover more
quadrants on the emotional wheel and/or include other music
discovery initiatives. The following examples show four alternative
clusters called `what inspires me?`, `what is working for me?`,
`what are the possibilities for me?` and `what is missing for me?`.
It is to be understood that any other cluster may be defined.
[0108] The cluster `what inspires me?` is for example a cluster on:
personalized music tracks with positive emotions; music genogram
initiatives involving types of music genogram, types of music
genogram tags and the option chosen among the three methods of
exploring a genogram; music tracks/artists following the above
habits; and a high score on the cumulative initiative index on
discovering music, either in real time or non-real time.
[0109] The cluster `what is working for me?` is for example a
cluster on: trajectories with all favorable music shocks; types of
genograms; the option chosen among the three methods of exploring a genogram;
and music tracks/artists following the above habits.
[0110] The cluster `what are the possibilities for me?` is for
example a cluster on: trajectories with no music shocks; music
tracks/artists following the above habits; and little-tried
option(s) among the three methods of exploring a genogram.
[0111] The cluster `what is missing for me?` is for example a
cluster on: trajectories which are rarely or never followed; music
tracks/artists following the above habits; unattempted option(s)
among the three methods of exploring a genogram; and a low score on
the cumulative initiative index on discovering music, either
in real time or non-real time.
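These four example clusters might be captured declaratively as in the
following sketch; the criterion keys and values are assumptions that merely
paraphrase the prose above.

```python
# Declarative sketch of the four example clusters.
EXAMPLE_CLUSTERS = {
    "what inspires me?": {
        "emotions": "positive",
        "initiative_index": "high",
    },
    "what is working for me?": {
        "music_shocks": "all favorable",
    },
    "what are the possibilities for me?": {
        "music_shocks": "none",
        "genogram_methods": "little tried",
    },
    "what is missing for me?": {
        "trajectories": "rarely followed",
        "genogram_methods": "unattempted",
        "initiative_index": "low",
    },
}
```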
[0112] The described hybrid architecture takes care of tangible
(incremental or radical) changes in the user's growing music
understanding. It features dynamically revising the perception
about music along the basic emotions as well as combinations of
emotions and opposites of basic emotions. It measures and expresses
the perception change of the user on rating the songs at micro
levels of emotion such as carry-over beliefs and current beliefs. It
supplements users with an intuitive user interface for instant
gratification on the micro elements used in building the playlist.
It offers the feature of simultaneously generating personalized
options and choosing personalized solutions. It captures the
learning potential and the rate of change of the learning potential
of the listener in real time. It captures the learning potential and
the rate of change of the learning potential of the listener in the
experimenting time. It gives an individualized expression of the
music listening style capturing extrinsic and intrinsic habits
related to personal traits and related to music discoveries. An
aggregate metric on one's music-style is highly desirable since a
changed music-style expression will reflect a change in the personal
expression and forces one to think about his/her listening
habit/style. It optimizes for personalized recommendations whilst
offering possibilities to fully discover expert recommendations.
This combined aspect of playlist recommendation is highly desirable.
It gives an option to track partially discovered/explored songs. It
generates recommendations adapting to the universal music styles. It
fully monitors the reasons for following a continuum (music tracks
being played without disturbances) and shocks on the songs of the
playlist. It evaluates the songs populated in the playlist to
create positive experiences on music discoveries at macro and/or
micro levels of a music track.
[0113] In the foregoing, examples are given of playlists containing
music tracks. It is to be understood that the invention is not
limited to playlists containing music tracks. Any media item can be
included in the playlist, such as a music track, a video or a
picture. A playlist may contain music items, videos or pictures
only. Alternatively a playlist contains a mixture of music items,
videos and/or pictures.
[0114] FIG. 13 shows an example of steps performed in a
computer-implemented method for creating a playlist. In step 1001 a
graphical representation of an emotional wheel is displayed. In
step 1002 a first input is received indicating a starting point of
a mood trajectory in the emotional wheel. In step 1003 a second
input is received indicating an end point of the mood trajectory in
the emotional wheel. In step 1004 the mood trajectory is defined by
connecting the starting point to the end point via one or more
intermediate points. In step 1005 a graphical representation of the
mood trajectory is displayed in the graphical representation of the
emotional wheel. In step 1006 the media items are selected by
searching in the meta-data for emotion characteristics or mood
characteristics that match the initial mood, the intermediate moods
and the destination mood, respectively. In step 1007 the playlist
of media items is created in an order from initial mood to
destination mood.
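A minimal Python sketch of steps 1004-1007 could look as follows, assuming
a simple MediaItem type with a single mood characteristic in its meta-data;
real meta-data matching would be richer than this exact-match comparison.

```python
from dataclasses import dataclass

@dataclass
class MediaItem:
    title: str
    mood: str  # emotion/mood characteristic held in the item's meta-data

def create_playlist(initial_mood: str, destination_mood: str,
                    intermediate_moods: list[str],
                    library: list[MediaItem]) -> list[MediaItem]:
    """Steps 1004-1007: connect the starting point to the end point via the
    intermediate points, then populate each point with matching media items
    in an order from initial mood to destination mood."""
    trajectory = [initial_mood, *intermediate_moods, destination_mood]
    playlist: list[MediaItem] = []
    for mood in trajectory:
        playlist.extend(item for item in library if item.mood == mood)
    return playlist
```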
[0115] FIG. 14 shows an example of steps performed in a
computer-implemented method for creating personalized meta-data for
a media item. In step 1011 emotion data is received indicative of a
primary emotion or mood or a secondary emotion or mood experienced
or felt by a user when listening to or watching the media item. In
step 1012 description data is received indicative of a situation
indicated by the user to be related to the media item. In step 1013
the emotion data and the description data are stored in the
meta-data. In optional step 1014 time data is received indicative
of a moment in time indicated by the user to be related to the
media item. In step 1015 the time data is stored in the
meta-data.
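A hedged sketch of steps 1011-1015, with assumed field names, might be:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PersonalizedMetaData:
    emotion: str                  # step 1011: primary/secondary emotion or mood
    situation: str                # step 1012: situation related to the media item
    moment: Optional[str] = None  # step 1014 (optional): related moment in time

def store_personalized_meta_data(meta_data: dict,
                                 entry: PersonalizedMetaData) -> None:
    """Steps 1013 and 1015: persist the emotion, description and time data
    in the media item's meta-data."""
    meta_data["personalized"] = entry
```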
[0116] FIG. 15 shows an example of steps performed in a
computer-implemented method for exploring music tracks. In step
1021 a graphical representation is displayed of a first music
genogram of a first music track. In step 1022 a first exploration
input is received indicating a selection of one of the tags in one
of the sections. If the first exploration input indicates a
pro-tag, then in step 1023 a link is displayed to a second music
genogram of a second music track. If the pro-tag comprises two or
more micro-pro tags, then in step 1024 a graphical representation
is displayed of the decomposition of the pro-tag and a link to a
third music genogram of a third music track for each of the
micro-pro tags. This branching decision is taken in step 1025.
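The branching of steps 1022-1025 might be sketched as follows; the ProTag
type and the two-or-more test are assumptions drawn from the description
above.

```python
from dataclasses import dataclass, field

@dataclass
class ProTag:
    second_genogram: str                 # link target of step 1023
    micro_pro_tags: list[str] = field(default_factory=list)

def explore_pro_tag(tag: ProTag) -> list[str]:
    """Steps 1023-1025: always link to the second genogram; if the pro-tag
    decomposes into two or more micro-pro tags, add one further link per
    micro-pro tag (step 1024)."""
    links = [tag.second_genogram]            # step 1023
    if len(tag.micro_pro_tags) >= 2:         # decision of step 1025
        links.extend(tag.micro_pro_tags)     # step 1024
    return links
```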
[0117] FIG. 16 shows an example of steps performed in a
computer-implemented method for enabling personalized media items
recommendations. In step 1031 real time and/or non-real time
initiatives data is collected. In step 1032 the real time and/or
non-real time initiatives data is mapped on an emotional wheel. In
step 1033 an intuitive expression is aggregated to define a
personalized music style by meticulously analyzing the real time
and/or non-real time initiatives data.
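A minimal sketch of steps 1031-1033, assuming the initiatives data arrives
as (initiative, quadrant) pairs and that the aggregated expression is a
simple per-quadrant share, could be:

```python
from collections import Counter

def personalized_style_expression(
        initiatives: list[tuple[str, int]]) -> dict[int, float]:
    """Steps 1031-1033 in one pass: collect (initiative, quadrant) events,
    map them per quadrant of the emotional wheel, and aggregate a simple
    per-quadrant expression (here, a normalized share of all initiatives)."""
    per_quadrant = Counter(quadrant for _, quadrant in initiatives)  # 1031-1032
    total = sum(per_quadrant.values()) or 1
    return {q: n / total for q, n in per_quadrant.items()}           # step 1033
```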
[0118] It is to be understood that the order of the steps shown in
FIG. 13, FIG. 14, FIG. 15 and FIG. 16 can be different than
shown.
[0119] It is to be understood that any feature described in
relation to any one embodiment may be used alone, or in combination
with other features described, and may also be used in combination
with one or more features of any other of the embodiments, or any
combination of any other of the embodiments. One embodiment of the
invention may be implemented as a program product for use with a
computer system. The program(s) of the program product define
functions of the embodiments (including the methods described
herein) and can be contained on a variety of computer-readable
storage media. Illustrative computer-readable storage media
include, but are not limited to: (i) non-writable storage media
(e.g., read-only memory devices within a computer such as CD-ROM
disks readable by a CD-ROM drive, ROM chips or any type of
solid-state non-volatile semiconductor memory) on which information
is permanently stored; and (ii) writable storage media (e.g.,
floppy disks within a diskette drive or hard-disk drive or any type
of solid-state random-access semiconductor memory or flash memory)
on which alterable information is stored. Moreover, the invention
is not limited to the embodiments described above, which may be
varied within the scope of the accompanying claims.
* * * * *