U.S. patent application number 13/264189 was filed with the patent office on 2011-10-12 and published on 2014-05-08 as publication number 20140128160, for a method and system for generating a sound effect in a piece of game software.
The applicants listed for this patent are Olivier Gillet and Elhad Piesczek-Ali. The invention is credited to Olivier Gillet and Elhad Piesczek-Ali.
United States Patent Application: 20140128160
Kind Code: A1
Gillet; Olivier; et al.
May 8, 2014
METHOD AND SYSTEM FOR GENERATING A SOUND EFFECT IN A PIECE OF GAME
SOFTWARE
Abstract
Disclosed is a method and system for generating a sound effect in a piece of game software. In response to a request from the game software for emission of a sound effect, audio data representing the sound effect is transmitted to a sound reproduction device. Audio data representing music in the course of reproduction, referred to as ambient music, is analyzed in order to determine at least one characteristic (BEAT, GENRE, KEY) of the ambient music. At least one characteristic of the transmission is then defined from the at least one characteristic (BEAT, GENRE, KEY) of the ambient music.
Inventors: Gillet; Olivier (Paris, FR); Piesczek-Ali; Elhad (Paris, FR)

Applicants:
Name                 City   Country
Gillet; Olivier      Paris  FR
Piesczek-Ali; Elhad  Paris  FR
Family ID: 45558781
Appl. No.: 13/264189
Filed: October 12, 2011
PCT Filed: October 12, 2011
PCT No.: PCT/US11/55992
371 Date: October 13, 2011
Current U.S. Class: 463/35
Current CPC Class: A63F 2300/69 20130101; A63F 13/22 20140902; A63F 2300/1081 20130101; G10H 2210/076 20130101; G10H 2210/031 20130101; G10H 2210/071 20130101; A63F 2300/6081 20130101; A63F 13/215 20140902; A63F 13/54 20140902; G10H 1/0025 20130101; G10H 1/42 20130101; G10H 2210/141 20130101; A63F 13/44 20140902; G10H 2240/081 20130101; G10H 2240/085 20130101; G10H 2210/026 20130101; G10H 1/383 20130101
Class at Publication: 463/35
International Class: A63F 9/24 20060101 A63F009/24
Foreign Application Data

Date          Code  Application Number
Apr 12, 2011  FR    11/53,197
Claims
1. A method comprising: in response to a request for emission of a
sound effect from game software, accessing, via a sound
reproduction device, audio data representing the sound effect;
analyzing, via the sound reproduction device, ambient music from
the game software in order to determine at least one characteristic
of the ambient music, said ambient music comprising audio data
representing music in a course of reproduction of the game
software; and defining, via the sound reproduction device, at least one
characteristic of the sound effect based on the at least one
characteristic of the ambient music.
2. The method of claim 1, further comprising: analyzing the audio
data of said ambient music for determining instants at which the
ambient music has a rhythmic beat; determining the at least one
characteristic of the ambient music based on the instants of the
rhythmic beat; and defining an instant from the instants at which
the ambient music has the rhythmic beat in order to determine the
at least one characteristic of the sound effect in accordance with
the at least one characteristic of the ambient music.
3. The method of claim 2, further comprising: defining the instant as an instant that follows a last instant at which the music has a rhythmic beat by an integer number of times an average time interval separating the instants at which the music has a rhythmic beat, said defining facilitating determining the instant from the instants at which the music has the rhythmic beat.
4. The method of claim 3, further comprising: analyzing the audio
data representing the ambient music in order to determine a musical
genre for the ambient music, said determination comprises
determining the at least one characteristic of the ambient music;
and selecting, from the audio data associated with different
musical genres of ambient music, audio data associated with at
least one of said different genres of the ambient music in order to
define the at least one characteristic of the sound effect in
accordance with the at least one characteristic of the ambient
music, wherein the accessed audio data corresponds to the selected
audio data.
5. The method of claim 4, further comprising: analyzing the audio
data representing the ambient music for determining a key for the
ambient music in order to analyze the audio data representing the
ambient music, said analyzing comprises determining the at least
one characteristic of the ambient music; and determining a desired
pitch from the determined key in order to determine the at least
one characteristic of the sound effect in accordance with the at
least one characteristic of the ambient music.
6. The method of claim 5, further comprising: analyzing the audio
data representing the ambient music to determine a bass line and a
melody line for the ambient music in order to analyze the audio
data representing the ambient music and for determining the key for
the ambient music; and determining the key of the ambient music
from the bass line and the melody line.
7. The method of claim 6, further comprising: recovering audio data
representing a sound effect having a certain pitch; and modifying the recovered audio data so that the sound effect has the desired pitch, wherein the modified audio data corresponds to the selected audio data.
8. The method of claim 3, further comprising: determining
parameters of a software synthesizer from the at least one
characteristic of the ambient music and defined relationships
between the ambient music and the sound effect; and implementing
the software synthesizer with the determined parameters so that it
synthesizes sound effect audio data, wherein the accessed audio
data corresponds to the synthesized audio data.
9. A computer-readable storage medium tangibly encoded with
computer-executable instructions, that when executed by a computing
device, perform a method comprising: in response to a request for
emission of a sound effect from game software, accessing, via a
sound reproduction device, audio data representing the sound
effect; analyzing, via the sound reproduction device, ambient music
from the game software in order to determine at least one
characteristic of the ambient music, said ambient music comprising
audio data representing music in a course of reproduction of the
game software; and defining, via the sound reproduction device, at
least one characteristic of the sound effect based on the at least
one characteristic of the ambient music.
10. A data processing system comprising: a sound reproduction
device; a storage device on which a computer program comprising
computer-executable instructions is stored; a central processing
unit for executing the computer-executable instructions stored at
the storage device, where upon execution, the central processing
unit performs a method comprising: in response to a request for
emission of a sound effect from game software, accessing, via a
sound reproduction device, audio data representing the sound
effect; analyzing, via the sound reproduction device, ambient music
from the game software in order to determine at least one
characteristic of the ambient music, said ambient music comprising
audio data representing music in a course of reproduction of the
game software; and defining, via the sound reproduction device, at
least one characteristic of the sound effect based on the at least
one characteristic of the ambient music.
Description
FIELD
[0001] The present disclosure relates to a method and system for
generating a sound effect in a piece of game software, and in
particular for synchronizing the sound effects of a video game to
background music played as a substitute for the original game music.
BACKGROUND
[0002] Many video game players prefer to play music from their own
collection instead of the original background score authored for
the game. As a result, they may switch off the game's original
sound effects, which may be perceived as unwanted or even
annoying.
SUMMARY
[0003] The present disclosure relates to adjusting the sound
effects of a video game in such a way that they blend perfectly
with whatever piece of music the user has decided to play as a
substitute for the original game music. The aim of the disclosure
is to allow satisfactory immersion in the game, even when a user is
using his own ambient music, by encouraging the user to keep the
sound effects provided.
[0004] According to some embodiments, the present disclosure discusses a method for generating a sound effect in a piece of game software. In response to a request for emission of a sound effect from the game software, the method transmits audio data representing the sound effect to a sound reproduction device. The method analyzes audio data representing music in the course of reproduction, referred to as ambient music, in order to determine at least one characteristic of the ambient music. The method then defines at least one characteristic of the transmission from the at least one characteristic of the ambient music.
[0005] According to some embodiments, determining the at least one characteristic of the ambient music comprises analyzing the audio data representing the ambient music in order to determine instants at which the ambient music has a rhythmic beat. The method then defines, from those instants, the instant at which the transmission starts, thereby determining the at least one characteristic of the transmission from the at least one characteristic of the ambient music.
[0006] According to some embodiments, the method defines, as the instant at which the transmission starts, an instant that follows the last instant at which the music has a rhythmic beat by an integer number of times the average time interval separating the instants at which the music has a rhythmic beat. According to some embodiments, it is preferable that this be once the average time interval.
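As a purely illustrative sketch (not taken from the patent; function and parameter names are hypothetical), the start instant described above can be computed from the detected beat instants like this:

```python
def transmission_start(beat_times, now, k=1):
    """Return the instant at which the sound-effect transmission starts:
    the last detected beat plus an integer number of average beat
    intervals; k = 1 (once the average interval) is the preferred case."""
    # Average time interval separating the instants at which the
    # music has a rhythmic beat.
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    avg = sum(intervals) / len(intervals)
    # Step forward by k average intervals until the instant lies
    # after the current time.
    t = beat_times[-1]
    while t <= now:
        t += k * avg
    return t
```

With beats detected at 0.0, 0.5, 1.0 and 1.5 s (a 120 BPM pulse), a request arriving at 1.6 s yields a start instant of 2.0 s, i.e. the next anticipated beat.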
[0007] According to some embodiments, determining the at least one characteristic of the ambient music comprises analyzing the audio data representing the ambient music in order to determine a musical genre for the ambient music. The method then selects, from among several audio data associated with different musical genres, the audio data associated with the genre of the ambient music, the audio data of the transmission stemming from the selected audio data.
[0008] According to some embodiments, determining the at least one characteristic of the ambient music comprises analyzing the audio data representing the ambient music in order to determine a key for the ambient music. The method then determines a desired pitch from the determined key in order to determine the at least one characteristic of the transmission from the at least one characteristic of the ambient music.
[0009] According to some embodiments, determining the key for the ambient music comprises analyzing the audio data representing the ambient music in order to determine a bass line and a melody line for the ambient music. The key of the ambient music is then determined from the bass line and the melody line.
[0010] According to some embodiments, the method further includes recovering audio data representing a sound effect having a certain pitch and modifying the recovered audio data so that the sound effect that they represent has the desired pitch, the audio data of the transmission stemming from the audio data that have been modified in this manner.
[0011] According to some embodiments, the method further includes determining parameters of a software synthesizer from, firstly, the at least one characteristic of the ambient music and, secondly, defined relationships between the ambient music and the sound effect. The method then implements the software synthesizer with the determined parameters so that it synthesizes sound effect audio data, the audio data of the transmission stemming from the audio data that have been synthesized in this manner.
[0012] In another embodiment, a computer-readable storage medium is
disclosed for generating a sound effect in a piece of game
software.
[0013] In yet another embodiment, a system is disclosed for
generating a sound effect in a piece of game software. The system
includes a data processing system which includes a sound
reproduction device, a storage device on which a computer program
has been saved, and a central processing unit for executing the
instructions of the computer program.
[0014] These and other aspects and embodiments will be apparent to
those of ordinary skill in the art by reference to the following
detailed description and the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] In the drawing figures, which are not to scale, and where
like reference numerals indicate like elements throughout the
several views:
[0016] FIG. 1 is a block diagram of a data processing system in
accordance with an embodiment of the present disclosure;
[0017] FIG. 2 is a block diagram illustrating instruction blocks in
a piece of game software implemented by the data processing system
of FIG. 1 in accordance with an embodiment of the present
disclosure;
[0018] FIG. 3 illustrates a flow chart for generating a sound
effect in accordance with an embodiment of the present disclosure;
[0019] FIG. 4 is a block diagram illustrating an internal
architecture of a computing device in accordance with an embodiment
of the present disclosure.
DESCRIPTION OF EMBODIMENTS
[0020] Embodiments are now discussed in more detail referring to
the drawings that accompany the present application. In the
accompanying drawings, like and/or corresponding elements are
referred to by like reference numbers.
[0021] Various embodiments are disclosed herein; however, it is to
be understood that the disclosed embodiments are merely
illustrative of the disclosure that can be embodied in various
forms. In addition, each of the examples given in connection with
the various embodiments is intended to be illustrative, and not
restrictive. Further, the figures are not necessarily to scale,
some features may be exaggerated to show details of particular
components (and any size, material and similar details shown in the
figures are intended to be illustrative and not restrictive).
Therefore, specific structural and functional details disclosed
herein are not to be interpreted as limiting, but merely as a
representative basis for teaching one skilled in the art to
variously employ the disclosed embodiments.
[0022] The present disclosure is described below with reference to
block diagrams and operational illustrations of methods and
devices. It is understood that each block of the block diagrams or
operational illustrations, and combinations of blocks in the block
diagrams or operational illustrations, can be implemented by means
of analog or digital hardware and computer program instructions.
These computer program instructions can be provided to a processor
of a general purpose computer, special purpose computer, ASIC, or
other programmable data processing apparatus, such that the
instructions, which execute via the processor of the computer or
other programmable data processing apparatus, implement the
functions/acts specified in the block diagrams or operational block
or blocks.
[0023] In some alternate implementations, the functions/acts noted
in the blocks can occur out of the order noted in the operational
illustrations. For example, two blocks shown in succession can in
fact be executed substantially concurrently or the blocks can
sometimes be executed in the reverse order, depending upon the
functionality/acts involved. Furthermore, the embodiments of
methods presented and described as flowcharts in this disclosure
are provided by way of example in order to provide a more complete
understanding of the technology. The disclosed methods are not
limited to the operations and logical flow presented herein.
Alternative embodiments are contemplated in which the order of the
various operations is altered and in which sub-operations described
as being part of a larger operation are performed
independently.
[0024] The principles described herein may be embodied in many
different forms. The described systems and methods allow for
synchronizing the sound effects of a video game to background
music. The described systems and methods adjust the sound effects
in such a way that they blend perfectly with whichever piece of
music the player has decided to play as a substitution to the
original game music.
[0025] For the purposes of this disclosure the terms "end user",
"user" and "player" should be understood to refer to a consumer of
data supplied by a data provider. By way of example, and not
limitation, the term "user" can refer to a person who receives data
provided by the data provider over the Internet in a browser
session, or can refer to an automated software application which
receives the data and stores or processes the data.
[0026] For the purposes of this disclosure, a computer readable
medium stores computer data, which data can include computer
program code that is executable by a computer, in machine readable
form. By way of example, and not limitation, a computer readable
medium may comprise computer readable storage media, for tangible
or fixed storage of data, or communication media for transient
interpretation of code-containing signals. Computer readable
storage media, as used herein, refers to physical or tangible
storage (as opposed to signals) and includes without limitation
volatile and non-volatile, removable and non-removable media
implemented in any method or technology for the tangible storage of
information such as computer-readable instructions, data
structures, program modules or other data. Computer readable
storage media includes, but is not limited to, RAM, ROM, EPROM,
EEPROM, flash memory or other solid state memory technology,
CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic
tape, magnetic disk storage or other magnetic storage devices, or
any other physical or material medium which can be used to tangibly
store the desired information or data or instructions and which can
be accessed by a computer or processor.
[0027] For the purposes of this disclosure a module is a software,
hardware, or firmware (or combinations thereof) system, process or
functionality, or component thereof, that performs or facilitates
the processes, features, and/or functions described herein (with or
without human interaction or augmentation). A module can include
sub-modules. Software components of a module may be stored on a
computer readable medium. Modules may be integral to one or more
computers (or servers), or be loaded and executed by one or more
computers (or servers). One or more modules may be grouped into an
engine or an application. As discussed herein, a background music
analyzer, game sound effects analyzer and a sound effect scheduler
can be a module that is a software, hardware, or firmware (or
combinations thereof) system for automatically synchronizing game
sound effects with background music.
[0028] For the purposes of this disclosure the term "server" should
be understood to refer to a service point which provides
processing, database, and communication facilities. By way of
example, and not limitation, the term "server" can refer to a
single, physical processor with associated communications and data
storage and database facilities, or it can refer to a networked or
clustered complex of processors and associated network and storage
devices, as well as operating software and one or more database
systems and applications software which support the services
provided by the server.
[0029] As discussed herein, many users of game software prefer to
play music from their own music collection rather than the music
initially provided with the game software. By way of non-limiting
examples, there are several ways of replacing the music initially
provided in the game software with other ambient music via a
background music analyzer. By way of an example, at the game-level,
the game software may provide an option to use an ambient music
file (for example a file in mp3 format) from the user instead of
the ambient music initially provided. As a non-limiting variant, at
the system-level, users simply turn off the ambient music initially
provided to replace it with ambient music from a piece of software
other than the game software, generally a multimedia player such as
the software VLC or the software foobar2000. As a further
non-limiting variant, at room-level, users simply turn off the
ambient music initially provided to replace it with ambient music
from a source other than the data processing system executing the
game, for example a hi-fi system. Moreover, it has been noticed
that users also often turn off the sound effects provided in the
game software because they are perceived as disturbing the ambient
music which they have chosen. As a result, they are less immersed
in the game and the playing pleasure decreases. The background
music analyzer is a library integrated into a game, responsible for
recording the music which is substituted for the original game
music, either through direct access to the audio file (at the
game-level), through OS-level interception of audio buffers (at the
system-level), or through direct recording with a microphone (at
the room-level).
[0030] According to some embodiments, as discussed herein, a
recorded signal can be split into overlapping frames, such as 100
ms frames. The following functions can be used to extract features
for each frame: (1) Beat detection function: a function showing
sharp peaks at beats; (2) Key detection function: indicating the
probability that the music has been, over a past period of time,
such as 20 s, in a specific tonality. According to some embodiments, a predetermined number of key detection functions is computed, one for each minor and major tonality. For example, 24 key detection functions are computed, one for each of the 12 minor and 12 major tonalities. The beat detection function is computed by periodicity estimation and tracking of an onset detection function. The key detection function is computed by matching a bass and melody chromagram with note distribution templates computed for each scale. The chromagram is obtained by binning the frequency spectrum
into a number of bins (e.g., 12 bins) mapped to a number of tones
(e.g., 12 tones) of equal temperament scale; or by encoding into a
number of pitch classes (e.g., 12 pitch classes) the output of a
multi-pitch estimator. Additional genre information can be
extracted through the use of standard machine learning techniques,
such as but not limited to, SVM or Bayesian classifier using
mixtures of Gaussian distributions trained on annotated audio
files.
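The template-matching step above can be sketched as follows. This is an illustrative stand-in, not the patent's implementation: the patent does not specify which note-distribution templates are used, so the weights below are the widely used Krumhansl-Kessler key profiles, rotated to each of the 12 major and 12 minor tonics and correlated against a 12-bin chromagram:

```python
import numpy as np

# Krumhansl-Kessler key profiles (index 0 = tonic), one common
# choice of note-distribution template; illustrative only.
MAJOR = np.array([6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
                  2.52, 5.19, 2.39, 3.66, 2.29, 2.88])
MINOR = np.array([6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
                  2.54, 4.75, 3.98, 2.69, 3.34, 3.17])

def detect_key(chroma):
    """Return (tonic, mode) maximizing the correlation between a
    12-bin chromagram and a rotated scale template (tonic 0 = C)."""
    best = None
    for mode, template in (("major", MAJOR), ("minor", MINOR)):
        for tonic in range(12):
            # Rotate the template so index `tonic` holds the tonic weight.
            score = float(np.dot(np.roll(template, tonic), chroma))
            if best is None or score > best[0]:
                best = (score, tonic, mode)
    return best[1], best[2]
```

A chromagram with energy concentrated on the pitch classes C, E and G, for instance, is matched to the key of C major.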
[0031] As discussed herein, at least in view of the above
discussion of the background music analyzer, a game sound effects
analyzer analyzes each of the sound effects samples used in the
game to detect their fundamental frequency, using an algorithm such
as YIN. It is either used during the game development process, in
which case all the sound effect samples produced for the game can be
annotated with their pitch, or embedded in the game, in which case
the analysis can be performed every time the game is launched. Where
the analysis is part of the game asset preparation
procedure, different sound effects can also be annotated with a
specific music genre, or different sets of sound effects can be
created that match different music genres. For example, the
destruction of an enemy in a game can be sonified by a synthesizer
sound in the "electro" sample set, and a brass hit in the "soul"
sample set.
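The fundamental-frequency detection can be approximated with a plain autocorrelation estimator, shown below as a hedged sketch: YIN, the algorithm the text names, refines this idea with a cumulative-mean-normalized difference function, so this simpler stand-in only illustrates the principle:

```python
import numpy as np

def fundamental_frequency(samples, sample_rate, fmin=50.0, fmax=1000.0):
    """Crude autocorrelation-based pitch estimate for a sound effect
    sample; a stand-in for YIN, not the patent's implementation."""
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()
    # Full autocorrelation; keep non-negative lags only.
    corr = np.correlate(x, x, mode="full")[len(x) - 1:]
    lo = int(sample_rate / fmax)                      # smallest plausible period
    hi = min(int(sample_rate / fmin), len(corr) - 1)  # largest plausible period
    # The lag of the strongest peak in range is the estimated period.
    period = lo + int(np.argmax(corr[lo:hi]))
    return sample_rate / period
```

Running this on a 440 Hz sine sampled at 44.1 kHz returns an estimate within a few hertz of 440 Hz, the resolution being limited by the integer lag grid.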
[0032] As discussed herein, at least in view of the above
discussion of the background music analyzer and game sound effects
analyzer, a sound effect scheduler can be embedded in the game and
may be responsible for the playback of the game sound effects. It
can operate in two modes. In a normal operating mode, the samples
are played at their original pitch immediately after the moment the
action that triggers them has taken place. In a music-synchronous
mode, the sound effect scheduler queries the background music
analyzer to retrieve the times at which the past number of beats
(e.g., 4 beats) have been played in the background music, and the
most probable tonality of the background music. The position in
time of the past number of beats (e.g., 4 beats) can be used to
anticipate the time at which the next beat will occur. Every time the player initiates an action in the game that triggers a sound effect, the sound effect is not played instantly; instead, it is delayed so that its playback will coincide with the next beat in the music. Additionally, the difference in pitch between the original sound effect sample (as computed by the sound effect analyzer) and the tonality of the music is compensated for, using transposition methods such as sample rate conversion or pitch-shifting. Where the game sound effects bank has been annotated by genre, the genre information returned by the analysis module can be used to further restrict the set of sound effects played back.
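The music-synchronous mode described above can be sketched as a small scheduling function. This is an assumption-laden illustration, not the patent's code: pitches are expressed here in MIDI-style semitone numbers, and the sample-rate-conversion transposition uses the standard factor of 2 raised to the semitone shift over 12:

```python
def schedule_effect(beat_times, now, sample_pitch, target_pitch):
    """Return (delay, rate) for music-synchronous playback: hold the
    effect until the anticipated next beat, and resample it so its
    pitch matches the tonality of the background music."""
    # Anticipate the next beat from the spacing of the past beats.
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    avg = sum(intervals) / len(intervals)
    next_beat = beat_times[-1] + avg
    while next_beat <= now:
        next_beat += avg
    # Pitch compensation by sample rate conversion: a shift of n
    # semitones corresponds to a playback-rate factor of 2 ** (n / 12).
    rate = 2.0 ** ((target_pitch - sample_pitch) / 12.0)
    return next_beat - now, rate
```

For beats at 0.0, 0.5, 1.0 and 1.5 s and an action at 1.6 s, the effect is held for 0.4 s so it lands on the anticipated beat at 2.0 s; transposing it up two semitones gives a playback rate of about 1.122.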
[0033] Certain embodiments will now be discussed in greater detail
with reference to the figures. In general, with reference to FIG.
1, a data processing system 100 in accordance with an embodiment
for synchronizing sound effects of a video game with background
music is shown. The data processing system 100 includes a central
unit 102 which contains a central processing unit 104, such as a
microprocessor, and a storage device 106, such as a hard disk. The
data processing system 100 has a man/machine interface 108
comprising input devices, such as for example a keyboard 110 and a
mouse 112, and output devices, such as for example a display screen
114 and a sound reproduction device 118, 120. By way of example,
the sound reproduction device can be comprised of a sound card 118
arranged in the central unit 102 and speakers 120 connected to the
sound card 118.
[0034] The data processing system 100 includes a sound capture
device 122, such as a microphone connected to the sound card 118.
The sound capture device 122 is designed to capture sound from a
musical source 124 which can be external to the data processing system
100. A non-limiting example of an external musical source 124 is a
hi-fi system.
[0035] It is to be understood that the present disclosure may be
implemented utilizing any number of computer technologies. For
example, although certain embodiments relate to providing access to
game software and ambient music via a computing device, the
disclosure may be utilized over any computer network, including,
for example, a wide area network, local area network, or corporate
intranet. Similarly, a computing device discussed in the data
processing system 100 may be any computing device that may be
coupled to a network, including, for example, personal digital
assistants, Web-enabled cellular telephones, devices that dial into
the network, mobile computers, personal computers, Internet
appliances, wireless communication devices, game consoles and the
like. Computing devices in data processing system 100 include a
program for interfacing with the network. Such program, as
understood in the art, can be a window or browser, or other similar
graphical user interface, for visually displaying the game to the
end user (or player) on the display 114 of the computing device.
Furthermore, servers for providing game software and/or ambient
music external to the game software may be of any type, running any
software, and the software modules, objects or plug-ins may be
written in any suitable programming language.
[0036] FIG. 2 illustrates instruction blocks in a piece of game
software implemented by the data processing system 100 of FIG. 1 in
accordance with some embodiments of the present disclosure. In FIG.
2, audio data FX.sub.A, FX.sub.B and FX.sub.C are saved in the
storage device 106 of the data processing system of FIG. 1. The
audio data FX.sub.A, FX.sub.B or FX.sub.C represent a sound effect
and are associated with respective musical genres G.sub.A, G.sub.B
and G.sub.C. A piece of game software 200 allowing a user to play a
game is likewise saved in the storage device 106.
[0037] The game software 200 includes game instructions 202 which
are designed to supply game information to a user through the
output devices of the man/machine interface 108, in that the game
information evolves on the basis of commands input by a user using
the input devices (e.g., 110, 112) of the man/machine interface
108. The game instructions 202 are designed to send a request R for
emission of a sound effect when the game is being executed. By way
of example, the request R is sent upon every action in the game
which is performed by the user using the input devices of the
man/machine interface 108, in that said action is associated with a
sound effect, as discussed below.
[0038] The game software 200 includes sound effect analysis
instructions 204. The sound effect analysis instructions 204 are
designed to analyze each saved instance of audio data FX.sub.A,
FX.sub.B and FX.sub.C and to determine the pitch P.sub.A, P.sub.B
and P.sub.C thereof. According to some exemplary embodiments, the
pitch corresponds to a fundamental frequency for the audio data, as
determined by means of, for example, a YIN algorithm. The sound
effect analysis instructions 204 are furthermore designed to create
associations between the audio data FX.sub.A, FX.sub.B or FX.sub.C
and the respective pitch P.sub.A, P.sub.B or P.sub.C thereof. That
is, a pitch value P.sub.A, P.sub.B or P.sub.C is determined from
the audio samples FX.sub.A, FX.sub.B or FX.sub.C respectively, and
this determination is taken into account for assigning a pitch
value to the sound effects.
[0039] The game software 200 includes instructions 206 for
analyzing a piece of music in the course of reproduction either by
the reproduction device 118, 120 or by the external reproduction
device 124. This music is referred to as ambient music. The ambient
music analysis instructions 206 are designed to recover audio data
MUS representing the ambient music. In a first case of replacing
ambient music, for example, the ambient music analysis instructions
206 are designed to directly access the music file indicated by the
user in the game software options. The game software options can be
a dialog box, window, menu or any other graphical user interface
element through which the user can configure aspects of the game,
such as, input controls, sound volume, music selection, etc. In a
second case of replacing ambient music, for example, the ambient
music analysis instructions 206 are designed to intercept the audio
buffers of an operating system running on the data processing
system 100 and executing the game software. In a third case of
replacing ambient music, for example, the ambient music analysis
instructions 206 are designed to use the sound capture device 122
to convert the ambient music into the audio data MUS.
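A minimal sketch of the first recovery case, in which the instructions directly access the user-selected music file: the helper name and the assumption of a 16-bit mono PCM WAV file are illustrative, not taken from the disclosure.

```python
import os
import struct
import tempfile
import wave

def recover_audio_data(path):
    """Recover the audio data MUS by reading the user-selected music
    file directly (16-bit mono PCM WAV assumed for this sketch),
    returned as floats in [-1.0, 1.0)."""
    with wave.open(path, "rb") as wav:
        raw = wav.readframes(wav.getnframes())
    values = struct.unpack("<%dh" % (len(raw) // 2), raw)
    return [v / 32768.0 for v in values]

# Demo: write a tiny mono WAV file, then recover it as MUS.
path = os.path.join(tempfile.mkdtemp(), "ambient.wav")
with wave.open(path, "wb") as wav:
    wav.setnchannels(1)          # mono
    wav.setsampwidth(2)          # 16-bit samples
    wav.setframerate(8000)       # 8 kHz
    wav.writeframes(struct.pack("<4h", 0, 16384, 0, -16384))
mus = recover_audio_data(path)
```

The second and third cases (intercepting operating-system audio buffers, or capturing via the sound capture device 122) are platform-specific and are not sketched here.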
[0040] The ambient music analysis instructions 206 are designed to
analyze the audio data MUS in order to determine at least one
characteristic of the ambient music. More precisely, in an example,
three characteristics of the ambient music are determined. Thus,
the ambient music analysis instructions 206 are designed to analyze
the audio data MUS in order to determine instants, denoted as BEAT
in FIG. 2, at which the ambient music has a rhythmic beat. The
ambient music analysis instructions 206 are also designed to
analyze the audio data MUS in order to determine a musical genre,
denoted GENRE in FIG. 2, for the ambient music. The ambient music
analysis instructions 206 are also designed to analyze the audio
data MUS in order to determine a key, denoted KEY in FIG. 2, for
the ambient music. A key is defined as the set of a tonic and a
mode. By way of example, the tonic is one of the twelve notes in
the classical scale (C, C sharp, D, D sharp, E, F, F sharp, G, G
sharp, A, A sharp, B), and the mode is chosen from among the
harmonic major mode and the harmonic minor mode. There are thus
twenty-four possible keys. To perform the analysis, for example,
the ambient music analysis instructions 206 are designed to analyze
the audio data MUS in order to determine a bass line and a melody
line for the ambient music. The key of the music is then determined
from the bass line and the melody line.
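The disclosure does not specify the key-finding algorithm; one common stand-in is Krumhansl-Schmuckler profile matching over the pitch classes extracted from the bass and melody lines, sketched below. The profiles and function name are illustrative, and the harmonic major and minor modes mentioned above are approximated here by the standard major/minor profiles.

```python
# Krumhansl-Kessler key profiles (major and minor), a standard choice.
MAJOR = [6.35, 2.23, 3.48, 2.33, 4.38, 4.09,
         2.52, 5.19, 2.39, 3.66, 2.29, 2.88]
MINOR = [6.33, 2.68, 3.52, 5.38, 2.60, 3.53,
         2.54, 4.75, 3.98, 2.69, 3.34, 3.17]
NOTES = ["C", "C#", "D", "D#", "E", "F",
         "F#", "G", "G#", "A", "A#", "B"]

def estimate_key(pitch_classes):
    """Return (tonic, mode) maximizing the match between the note
    histogram and each of the 24 rotated key profiles."""
    hist = [0] * 12
    for pc in pitch_classes:
        hist[pc % 12] += 1
    best = None
    for tonic in range(12):
        for mode, profile in (("major", MAJOR), ("minor", MINOR)):
            score = sum(hist[(tonic + i) % 12] * profile[i]
                        for i in range(12))
            if best is None or score > best[0]:
                best = (score, NOTES[tonic], mode)
    return best[1], best[2]

# Example: pitch classes gathered from the bass and melody lines of a
# piece in C major (0 = C, 2 = D, 4 = E, ...).
key = estimate_key([0, 2, 4, 5, 7, 9, 11, 0, 4, 7, 0])
```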
[0041] The game software 200 has sound effect generation
instructions 208, which correspond to the sound effects scheduler
discussed above. The sound effect generation instructions 208 are
designed to, in response to the sending of the request R, define at
least one characteristic of a transmission of audio data, denoted
FX in FIG. 2 and representing a sound effect, to the reproduction
device 118, 120. This at least one transmission characteristic is
determined from the at least one ambient music characteristic
determined by the ambient music analysis instructions 206. More
precisely, according to some embodiments,
and by way of a non-limiting example, the sound effect generation
instructions 208 are designed to define three transmission
characteristics from, respectively, the three ambient music
characteristics: BEAT, GENRE and KEY. Thus, the sound effect
generation instructions 208 are designed to define an instant
T.sub.0 at which the transmission starts from the instants BEAT, at
which the ambient music has a rhythmic beat. By way of example, the
sound effect generation instructions 208 are designed to define
this instant T.sub.0 as following the last rhythmic beat instant by
a time interval equal to an integer number of times the average
time interval separating the rhythmic beat instants. According to
some embodiments, transmission occurs once this average time
interval has elapsed.
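The scheduling rule described above might be sketched as follows; the function name and the handling of a request arriving between beats are illustrative assumptions.

```python
def schedule_transmission(beat_times, request_time, n=1):
    """Define the transmission instant T0 as the last detected beat
    plus an integer number n of average inter-beat intervals, pushed
    forward so it never falls before the request instant."""
    intervals = [b - a for a, b in zip(beat_times, beat_times[1:])]
    avg = sum(intervals) / len(intervals)     # average inter-beat gap
    last_beat = max(t for t in beat_times if t <= request_time)
    t0 = last_beat + n * avg
    while t0 < request_time:                  # never schedule in the past
        t0 += avg
    return t0

# Example: beats of a 120 BPM track; the sound effect is requested at
# t = 2.1 s and scheduled on the next beat at t = 2.5 s.
beats = [0.0, 0.5, 1.0, 1.5, 2.0]
t0 = schedule_transmission(beats, request_time=2.1)
```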
[0042] Furthermore, the sound effect generation instructions 208
are designed to select, from among the default audio data FX.sub.A,
FX.sub.B and FX.sub.C, those which are associated with the musical
genre GENRE of the ambient music, as provided by the instructions
206. The selected default audio data will subsequently be denoted
FX.sub.i and the pitch thereof P.sub.i. Furthermore, the sound
effect generation instructions 208 are designed to determine a
desired pitch P from the key KEY of the ambient music MUS as
provided by the instructions 206. Preferably, according to some
embodiments, the desired pitch P is the tonic or the fifth of the
key KEY. The sound effect generation instructions 208 are designed
to recover the selected default audio data FX.sub.i which, as
indicated previously, have a default pitch P.sub.i.
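A sketch of choosing the desired pitch P as the tonic or the fifth of the key KEY, assuming equal temperament with A4 = 440 Hz; the helper name and the octave choice are illustrative, not from the disclosure.

```python
# Hypothetical lookup of semitone positions for the twelve tonics.
NOTE_TO_SEMITONE = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
                    "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def desired_pitch(tonic, use_fifth=False, octave=4):
    """Frequency in Hz (equal temperament, A4 = 440 Hz) of the key's
    tonic, or of its fifth (seven semitones above the tonic)."""
    semitone = NOTE_TO_SEMITONE[tonic] + (7 if use_fifth else 0)
    midi = 12 * (octave + 1) + semitone       # MIDI note number
    return 440.0 * 2.0 ** ((midi - 69) / 12)

p_tonic = desired_pitch("A")                  # tonic of A: A4
p_fifth = desired_pitch("C", use_fifth=True)  # fifth of C: G4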
[0043] The sound effect generation instructions 208 are designed to
modify the recovered default audio data FX.sub.i so that the sound
effect which they represent has the desired pitch P. The sound
effect generation instructions 208 are designed to define the
selected and modified audio data as audio data FX which represents
the desired sound effect. The sound effect generation instructions
208 are designed to implement the transmission having the
characteristics defined previously, that is to say: the instant
T.sub.0 at which transmission starts, the audio data FX stemming
from default audio data FX.sub.i corresponding to the genre of the
ambient music and having the desired pitch P.
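The disclosure does not specify how the default audio data are modified to reach the desired pitch P; one minimal sketch is naive resampling with linear interpolation (all names here are illustrative).

```python
def shift_pitch(samples, current_pitch, desired_pitch):
    """Transpose `samples` from `current_pitch` to `desired_pitch` by
    naive resampling with linear interpolation. Note that this also
    rescales duration; a production system might instead use a phase
    vocoder or PSOLA to preserve the sound effect's length."""
    ratio = desired_pitch / current_pitch
    out = []
    for j in range(int(len(samples) / ratio)):
        x = j * ratio                         # fractional source index
        i = int(x)
        frac = x - i
        a = samples[min(i, len(samples) - 1)]
        b = samples[min(i + 1, len(samples) - 1)]
        out.append(a + frac * (b - a))        # linear interpolation
    return out

# Example: transposing up one octave halves the sample count.
ramp = [float(i) for i in range(100)]
octave_up = shift_pitch(ramp, 440.0, 880.0)
```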
[0044] Having discussed the functional and executable components
for generating a sound effect in a piece of game software, their
operation will now be described with reference to FIG. 3. FIG. 3 is
a flow chart showing the steps in a method 300 for generating a
sound effect, via the data processing system 100 in FIG. 1
executing the instructions of the game software in FIG. 2, in
accordance with an embodiment of the present disclosure. In Step
302,
the data processing system 100 receives a request for execution of
the game software 200 from the user through the man/machine
interface 108. In Step 304, in response to reception of the
request, the data processing system 100 launches the game software
200. In Step 305 in which the game is initialized, the processing
unit 104 executing the sound effect analysis instructions 204
analyzes the audio data FX.sub.A, FX.sub.B and FX.sub.C, determines
the respective pitch P.sub.A, P.sub.B and P.sub.C thereof, in the
manner indicated with reference to FIG. 2, and creates associations
between the audio data FX.sub.A, FX.sub.B and FX.sub.C and the
respective pitch P.sub.A, P.sub.B, P.sub.C thereof.
[0045] In Step 306, the central processing unit 104 executing the
game instructions 202 supplies game information to the user through
the output devices (screen, sound reproduction device, etc.) of the
man/machine interface 108 on the basis of commands which are input
by the user using the input devices 110, 112 (keyboard, mouse,
etc.) of the man/machine interface 108. In parallel with Step 306,
as in Step 308, the processing unit 104 executing the ambient music
analysis instructions 206 recovers audio data MUS representing the
ambient music. Still in parallel with Step 306, in Step 310, the
processing unit 104 executing the ambient music analysis
instructions 206 analyzes the audio data MUS in order to determine
at least one characteristic of the ambient music, for example the
three characteristics BEAT, GENRE and KEY indicated previously.
[0046] In Step 316, the central processing unit 104 executing the
game instructions 202 receives a command from the user through the
input devices of the man/machine interface 108 in order to perform
an action in the game, where the action is associated with a sound
effect. In Step 318, in response to reception of the command from
the user, the central processing unit 104 executing the game
instructions 202 sends a request R for emission of a sound effect.
In Step 320, in response to the request R, the central processing
unit 104 executing the sound effect generation instructions 208
defines the three characteristics T.sub.0, FX.sub.i and P on the basis
of, respectively, the three characteristics BEAT, GENRE and KEY of
the ambient music which were determined during step 310. In Step
322, the central processing unit 104 executing the sound effect
generation instructions 208 recovers the selected default audio
data FX.sub.i which, as indicated previously, represents a sound
effect having the default pitch P.sub.i. In Step 324, the central
processing unit 104 executing the sound effect generation
instructions 208 modifies the default audio data FX.sub.i so that
the sound effect which they represent changes from the pitch
P.sub.i to the desired pitch P. The audio data modified in this
manner are denoted FX. In Step 326, the central processing unit 104
executing the sound effect generation instructions 208 performs the
transmission at the instant T.sub.0, with the audio data FX which,
firstly, represents a sound effect at the pitch P and, secondly,
stems from the audio data FX.sub.i selected in accordance with the
genre of the ambient music.
[0047] Thus, the generated sound effect is harmoniously
incorporated into the ambient music on several levels: on a
rhythmic level as a result of the transmission instant T.sub.0, on
a melodic level as a result of the pitch P of said sound effect,
and on a stylistic level as a result of the selection of the audio
data FX.sub.i matching the genre of the ambient
music. The method 300 then returns to Steps 306 and 308.
[0048] FIG. 4 is a block diagram illustrating an internal
architecture of an example of a computing device, as discussed in
data processing system 100 of FIGS. 1-3, in accordance with one or
more embodiments of the present disclosure.
[0049] A computing device as referred to herein refers to any
device with a processor capable of executing logic or coded
instructions, and could be, as understood in context, a server,
personal computer, game console, set top box, smart phone,
pad/tablet computer or media device, to name a few such
devices.
[0050] As shown in the example of FIG. 4, internal architecture 400
includes one or more processing units (also referred to herein as
CPUs) 412, which interface with at least one computer bus 402. Also
interfacing with computer bus 402 are persistent storage
medium/media 406, network interface 414, memory 404, e.g., random
access memory (RAM), run-time transient memory, read only memory
(ROM), etc., media disk drive interface 408 as an interface for a
drive that can read and/or write to media including removable media
such as floppy, CD ROM, DVD, etc. media, display interface 410 as
interface for a monitor or other display device, keyboard interface
416 as interface for a keyboard, pointing device interface 418 as
an interface for a mouse or other pointing device, and
miscellaneous other interfaces not shown individually, such as
parallel and serial port interfaces, a universal serial bus (USB)
interface, and the like.
[0051] Memory 404 interfaces with computer bus 402 so as to provide
information stored in memory 404 to CPU 412 during execution of
software programs such as an operating system, application
programs, device drivers, and software modules that comprise
program code, and/or computer executable process steps,
incorporating functionality described herein, e.g., one or more of
process flows described herein. CPU 412 first loads computer
executable process steps from storage, e.g., memory 404, storage
medium/media 406, removable media drive, and/or other storage
device. CPU 412 can then execute the stored process steps in order
to execute the loaded computer-executable process steps. Stored
data, e.g., data stored by a storage device, can be accessed by CPU
412 during the execution of computer-executable process steps.
[0052] Persistent storage medium/media 406 is a computer readable
storage medium(s) that can be used to store software and data,
e.g., an operating system and one or more application programs.
Persistent storage medium/media 406 can also be used to store
device drivers, such as one or more of a digital camera driver,
monitor driver, printer driver, scanner driver, or other device
drivers, web pages, content files, playlists and other files.
Persistent storage medium/media 406 can further include program
modules and data files used to implement one or more embodiments of
the present disclosure.
[0053] Thus, from the above discussion, it is clear that a computer
program 200 and a method 300 as described above allow harmonious
incorporation of sound effects into any kind of ambient music
chosen by a user, or even predefined by the game software.
[0054] Those skilled in the art will recognize that the methods and
systems of the present disclosure may be implemented in many
manners and as such are not to be limited by the foregoing
exemplary embodiments and examples. In other words, the functional
elements may be performed by single or multiple components, in
various combinations of hardware, software or firmware, and
individual functions may be distributed among software applications
at either the client or server or both.
[0055] Thus, for example, the system can be composed of a games
console, an input for music, and an input for loading the game into
the console, the console being designed to implement the whole of
the method. The input for the music may be a USB port or a digital
disk reader.
[0056] In this regard, any number of the features of the different
embodiments described herein may be combined into single or
multiple embodiments, and alternate embodiments having fewer than,
or more than, all of the features described herein are possible.
Functionality may also be, in whole or in part, distributed among
multiple components, in manners now known or to become known. Thus,
myriad software/hardware/firmware combinations are possible in
achieving the functions, features, interfaces and preferences
described herein. Moreover, the scope of the present disclosure
covers conventionally known manners for carrying out the described
features and functions and interfaces, as well as those variations
and modifications that may be made to the hardware or software or
firmware components described herein as would be understood by
those skilled in the art now and hereafter.
[0057] In particular, the saved instances of the sound effect audio
data could be associated with pitches outside of execution of the
game software, either automatically (with software analysis) during
development of the game or by the musicians or engineers of the
sound themselves. In this case, step 305 of the method 300 from
FIG. 3 may be unnecessary.
[0058] Furthermore, the sound effect audio data could be adapted
not only to suit the possible musical genres of the ambient music
but also to suit possible keys of the ambient music. For example,
the sound effect audio data could be adapted to suit the
twenty-four keys corresponding to the twelve possible tonics and to
the two possible modes as discussed above. Thus, each saved
instance of audio data would be associated, in addition to a genre,
with a tonic and with a mode. According to some embodiments, the
sound effect generation instructions 208 would be designed to
select, from among the default audio data, those which are
associated not only with the musical genre of the ambient music but
also with the key thereof. Step 322 of the method in FIG. 3 would
be adapted as a result. Furthermore, it would no longer be
necessary to analyze the sound effect audio data in order to
determine the pitch thereof, nor to modify them in order to
transpose said pitch, so that steps 305 and 324 of the method in
FIG. 3 may be unnecessary.
[0059] Furthermore, the sound effect generation instructions 208
could be designed to synthesize the sound effect, that is, to
provide the audio data corresponding to said sound effect on the
basis of sound synthesis taking account of the characteristics of
the ambient music which are determined by the means 206,
particularly the characteristics KEY, GENRE and BEAT. There would
thus no longer be any need for sound effects to be saved nor for
the analysis means 204 illustrated in FIG. 2. By way of example,
the sound synthesis could comprise, firstly, a software synthesizer
having a certain number of modifiable parameters (for example the
fundamental frequency or the waveform from an oscillator, or else
the cutoff frequency of a filter) and, secondly, a set of
relationships, defined by mathematical expressions, between the
parameters of the software synthesizer and the characteristics of
the ambient music. Thus, steps 322 and 324 of the method in FIG. 3
would be replaced by a step involving determination of the
parameters of the software synthesizer from, firstly, the
characteristics of the ambient music KEY and GENRE and, secondly,
the defined relationships, and via a step involving implementation
of the software synthesizer with the determined parameters so that
it synthesizes sound effect audio data.
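A minimal sketch of such a mapping from ambient-music characteristics to synthesizer parameters: the genre-to-waveform rule, the one-beat duration, and all names here are invented for illustration, standing in for the mathematically defined relationships mentioned above.

```python
import math

def synthesize_effect(tonic_hz, genre, beat_interval, sample_rate=8000):
    """Map ambient-music characteristics onto synthesizer parameters:
    the oscillator frequency follows the key's tonic, the waveform is
    chosen from the genre, and the duration spans one beat."""
    waveform = "square" if genre == "rock" else "sine"  # invented rule
    length = int(beat_interval * sample_rate)
    out = []
    for n in range(length):
        s = math.sin(2 * math.pi * tonic_hz * n / sample_rate)
        if waveform == "square":
            s = 1.0 if s >= 0 else -1.0       # clip sine into a square
        out.append(s)
    return out

# Example: a one-beat (0.5 s) square-wave effect at 440 Hz.
fx = synthesize_effect(440.0, "rock", 0.5)
```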
[0060] While the system and method have been described in terms of
one or more embodiments, it is to be understood that the disclosure
need not be limited to the disclosed embodiments. It is intended to
cover various modifications and similar arrangements included
within the spirit and scope of the claims, the scope of which
should be accorded the broadest interpretation so as to encompass
all such modifications and similar structures.
* * * * *