U.S. patent application number 15/711139 was filed with the patent office on 2018-01-11 for mobile communication terminal and method thereof.
The applicant listed for this patent is Core Wireless Licensing S.a.r.l. The invention is credited to Jarmo KAUKO.
Application Number | 20180013877 (15/711139)
Document ID | /
Family ID | 39321401
Filed Date | 2018-01-11

United States Patent Application 20180013877
Kind Code: A1
KAUKO; Jarmo
January 11, 2018
MOBILE COMMUNICATION TERMINAL AND METHOD THEREOF
Abstract
A method for providing a user interface of a communication
apparatus comprises switching from a low power mode to a working
mode upon receiving a stream of audio data; and upon switching from
the low power mode to the working mode: extracting at least one
audio feature from said stream of audio data, and modifying the
appearance of at least one user interface component configured for
invoking a function of the communication apparatus, in accordance
with said extracted audio feature.
Inventors: KAUKO; Jarmo (Tampere, FI)

Applicant:
Name | City | State | Country | Type
Core Wireless Licensing S.a.r.l. | Luxembourg | | LU |

Family ID: 39321401
Appl. No.: 15/711139
Filed: September 21, 2017
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
14562450 (parent of 15711139) | Dec 5, 2014 |
11548443 (parent of 14562450) | Oct 11, 2006 | 8930002
Current U.S. Class: 1/1
Current CPC Class: H04M 1/72544 20130101; H04M 1/72583 20130101; G06F 3/04817 20130101; H04M 1/72558 20130101
International Class: H04M 1/725 20060101 H04M001/725; G06F 3/0481 20130101 G06F003/0481
Claims
1. A method for providing a user interface in a communication
apparatus, said method comprising: responsive to a stream of audio
data, generating an audio activation signal; responsive to the
audio activation signal, switching circuitry for transmitting
graphics data to a display of the communication apparatus from a
low power mode to a working mode; and then modifying, in accordance
with the stream of audio data, the appearance of at least one user
interface component configured for invoking a function of the
communication apparatus.
2. The method of claim 1, further comprising: extracting at least
one audio feature from the stream of audio data; wherein the
appearance of the at least one user interface component is modified
in accordance with the extracted at least one audio feature.
3. The method of claim 2, further comprising: classifying an
extracted audio feature into one of a plurality of predetermined
feature representations; wherein the appearance of the at least one
user interface component is modified in accordance with the
predetermined feature representation for the extracted audio
feature.
4. The method of claim 1, wherein the at least one user interface
component comprises a graphical object.
5. The method of claim 1, further comprising: in the communication
apparatus, generating the stream of audio data.
6. A communication apparatus comprising: a display configured to
visualize a user interface comprising at least one user interface
component configured for invoking a function of the communication
apparatus; an audio detector configured to generate an audio
activation signal responsive to a stream of audio data; and a
module for transmitting graphics data to the display, the module
configured to switch from a low power mode to a working mode to
transmit graphics data to the display responsive to the audio
activation signal, and configured to modify, in the working mode,
the at least one user interface component in accordance with the
stream of audio data.
7. The apparatus of claim 6, wherein said apparatus is a mobile
communication terminal.
8. The apparatus of claim 6, wherein the module comprises: a
graphics engine configured to determine the graphics data to be
transmitted to the display; an audio feature extractor, configured
to extract an audio feature from the stream of audio data; and a
user interface modifier, configured to generate user interface
modification data for modifying a user interface component based on
the extracted audio feature, and to transmit the user interface modification data to the graphics engine.
9. The apparatus of claim 8, wherein the audio feature extractor
and the user interface modifier are further configured to switch
from a low power mode to a working mode upon the module receiving
the audio activation signal.
10. The apparatus of claim 8, wherein the module further comprises:
an audio feature classifier configured to classify the extracted
audio feature into one of a set of predetermined feature
representations; wherein the user interface modifier generates user
interface modification data corresponding to the predetermined
feature representation corresponding to the extracted audio
feature.
11. The apparatus of claim 8, wherein the at least one user
interface component comprises a graphical object.
12. The apparatus of claim 8, wherein the module is a software
implemented module.
13. The apparatus of claim 8, wherein the module is a hardware
implemented module.
14. A non-transitory computer-readable medium having
computer-executable components comprising instructions that, when
executed in a communication apparatus, cause the apparatus to
perform a plurality of operations comprising: responsive to an
audio activation signal generated in response to a stream of audio
data, causing the switching of circuitry for transmitting graphics
data to a display of the communication apparatus from a low power
mode to a working mode; and then modifying, in accordance with the
stream of audio data, the appearance of at least one user interface
component configured for invoking a function of the communication
apparatus.
15. The non-transitory computer-readable medium of claim 14,
wherein the plurality of operations further comprises: extracting
at least one audio feature from the stream of audio data; wherein
the appearance of the at least one user interface component is
modified in accordance with the extracted at least one audio
feature.
16. The non-transitory computer-readable medium of claim 15,
wherein the plurality of operations further comprises: classifying
an extracted audio feature into one of a plurality of predetermined
feature representations; wherein the appearance of the at least one
user interface component is modified in accordance with the
predetermined feature representation for the extracted audio
feature.
17. The non-transitory computer-readable medium of claim 14,
wherein the at least one user interface component comprises a
graphical object.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation of U.S. application Ser.
No. 14/562,450, filed on Dec. 5, 2014, which is a continuation of
U.S. application Ser. No. 11/548,443, filed on Oct. 11, 2006, now U.S. Pat. No. 8,930,002, which is incorporated herein by reference
in its entirety.
FIELD
[0002] The disclosed embodiments generally relate to a method for providing a user interface modified in accordance with audio data, as well as to an associated module and apparatus.
BACKGROUND
[0003] Many mobile communication terminals of today include a music player, most often a so-called MP3 player and/or a radio receiver. A great advantage of including a music player is that, instead of two separate units, only a single unit is needed by users who want both a mobile communication terminal and a music player.
[0004] By including a music player in a mobile communication terminal, some of the hardware of the terminal may be shared with the music player. For instance, the display may be used by the music player to show the title of the song being played, the keypad may be used to control the music player, etc.
[0005] Although a number of hardware synergies may be achieved by running a music player on the same platform as a mobile communication terminal, there is a need to connect the music player more closely to the terminal in order to increase customer satisfaction.
SUMMARY
[0006] In view of the above, the disclosed embodiments aim to solve or at least reduce the problems discussed above. In particular, an advantage of the disclosed embodiments is to provide a user interface which is modified in accordance with audio data.
[0007] Generally, a method for providing a user interface modified in accordance with extracted audio features, and an associated module and apparatus according to the attached independent claims, are provided.
[0008] In a first aspect, the disclosed embodiments are directed to a method for providing a user interface of an apparatus, said user interface comprising a number of user interface components, said method comprising
[0009] receiving audio data, extracting at least one audio feature from said audio data, and modifying the appearance of at least one of said number of user interface components in accordance with said extracted audio feature.
[0010] An advantage of this is that the user interface of the apparatus is made more lively, which increases user satisfaction.
[0011] Another advantage is that the user interface may be used at the same time as music visualization effects are shown. This implies that the apparatus may be used as usual while music visualization effects are being shown on the display.
[0012] Still another advantage is that the user interface of the apparatus will vary in accordance with the audio data generated by the music player. This implies that the music player and the other functions of the apparatus are perceived by the user as one apparatus, not as an apparatus which can, for instance, be transformed from a communication apparatus into a music player apparatus.
[0013] In the method according to the first aspect, the reception,
the extraction and the modification may be repeated.
[0014] Further, in the method according to the first aspect, the
user interface components may be 3-D rendered graphical
objects.
[0015] An advantage of having 3-D rendered graphical objects is
that a more sophisticated user interface may be utilised.
[0016] In the method according to the first aspect, the 3-D
rendered graphical objects may be hardware accelerated.
[0017] An advantage of this is that the responsiveness of the 3-D graphical objects of the user interface may be increased, which means that the user interface is quicker.
[0018] In the method according to the first aspect, the audio
visualization effects may be superposed upon the 3-D rendered
graphical objects.
[0019] In the method according to the first aspect, the modification may comprise classifying said extracted audio feature into one of a plurality of predetermined feature representations, and modifying the appearance of at least one of said number of user interface components in accordance with said one predetermined feature representation.
[0020] By having a number of feature representations determined in advance, the extracted audio feature may be classified into one of these predetermined representations. This implies that the classification may be performed more quickly and with less computational power, which is an advantage.
[0021] In the method according to the first aspect, the modification of said user interface components may be made in accordance with one of a set of user interface (UI) modification themes.
[0022] A UI modification theme may comprise information of how the
extracted audio feature(s) is to be presented in the UI. For
instance, the extracted audio feature(s) may be presented as a
histogram superposed on a 3-D rendered UI component, or the
extracted audio feature(s) may be presented as a number of circles
superposed on a 3-D rendered UI component.
[0023] An advantage of this is that the way in which the
modification of the user interface components is made may easily be
chosen by the user of the apparatus.
[0024] In the method according to the first aspect, the set of UI
modification themes may be user configurable.
[0025] In the method according to the first aspect, at least a number of said UI components may be modified, wherein each of said number of UI components may be modified in accordance with its respectively assigned audio feature.
[0026] An advantage of this is that different user interface components may be modified differently. For example, a first user interface component may be modified in accordance with bass frequencies, and a second user interface component in accordance with treble frequencies.
[0027] In a second aspect, the disclosed embodiments are directed
to a module comprising an audio feature extractor configured to
receive a stream of audio data and to extract at least one feature
of said stream of audio data, and a user interface modifier
configured to determine user interface modification data based upon
said extracted feature.
[0028] An advantage of this second aspect is that one or several of the user interface components may be modified in accordance with the audio data.
[0029] The module according to the second aspect may further
comprise an audio detector configured to detect an audio activation
signal and to activate said audio feature extractor or said user
interface modifier upon detection.
[0030] An advantage of this is that the audio feature extractor and the user interface modifier may remain in a low power mode until audio data is generated. When audio data is generated, an audio activation signal may be transmitted to the audio feature extractor or the UI modifier, and the power mode of the module may then be switched to a high power mode or, in other words, a working mode. Hence, the power efficiency of the module may be increased by having an audio detector present.
[0031] The module according to the second aspect may further
comprise a memory arranged to hold user interface modification
settings.
[0032] An advantage of having a memory arranged to hold user interface settings is that no memory capacity of the apparatus is used for holding them. This implies that fewer changes are needed to the apparatus in which the module is comprised.
[0033] The module according to the second aspect may further
comprise an audio feature classifier configured to classify said at
least one feature into one of a set of predetermined feature
representations.
[0034] An advantage of this is that the audio feature classifier
can be a hardware module or a software module specialized in this
kind of classification, which implies that less time and
computational power are needed.
[0035] The module according to the second aspect may further
comprise a memory arranged to hold predetermined feature
representations.
[0036] In a third aspect, the disclosed embodiments are directed to an apparatus comprising a display configured to visualize a user interface comprising a number of user interface components, a music player configured to generate audio data, a module configured to determine user interface modification data, and a graphics engine configured to modify said user interface components in accordance with said determined user interface modification data.
[0037] An advantage of this third aspect is that one or several of the user interface components may be modified in accordance with the audio data.
[0038] In the apparatus according to the third aspect, an audio
activation signal may be transmitted from said music player to said
module.
[0039] An advantage of this is that the module may remain in a low power mode until audio data is generated. When audio data is generated and the audio activation signal is transmitted to the module, the power mode of the module may then be switched to a high power mode or, in other words, a working mode. Hence, the power efficiency of the apparatus may be increased by sending an audio activation signal to the module.
[0040] In the apparatus according to the third aspect, the
apparatus may be a mobile communication terminal.
[0041] In the apparatus according to the third aspect, the user
interface components may be 3-D rendered objects.
[0042] An advantage of having 3-D rendered graphical objects is
that a more sophisticated user interface may be utilised.
[0043] In the apparatus according to the third aspect, the audio
visualization effects may be superposed onto said user interface
components.
[0044] In a fourth aspect, the disclosed embodiments are directed to a computer-readable medium having computer-executable components comprising instructions for receiving audio data, extracting at least one audio feature from said audio data, and modifying the appearance of at least one of a number of user interface components in accordance with said extracted audio feature.
[0045] In the computer-readable medium according to the fourth
aspect, the reception, the extraction and the modification may be
repeated.
[0046] In the computer-readable medium according to the fourth
aspect, the user interface components may be 3-D rendered graphical
objects.
[0047] In the computer-readable medium according to the fourth aspect, the modification may comprise classifying said extracted audio feature into a predetermined feature representation, and modifying the appearance of at least one of said number of user interface components in accordance with said predetermined feature representation.
[0048] Other features and advantages of the disclosed embodiments
will appear from the following detailed disclosure, from the
attached dependent claims as well as from the drawings.
[0049] Generally, all terms used in the claims are to be
interpreted according to their ordinary meaning in the technical
field, unless explicitly defined otherwise herein. All references
to "a/an/the [element, device, component, means, step, etc.]" are
to be interpreted openly as referring to at least one instance of
said element, device, component, means, step, etc., unless
explicitly stated otherwise. The steps of any method disclosed
herein do not have to be performed in the exact order disclosed,
unless explicitly stated.
BRIEF DESCRIPTION OF THE DRAWINGS
[0050] The above, as well as additional objects, features and
advantages of the disclosed embodiments, will be better understood
through the following illustrative and non-limiting detailed
description of the disclosed embodiments, with reference to the
appended drawings, where the same reference numerals will be used
for similar elements, wherein:
[0051] FIG. 1 is a flow chart of an embodiment of a method for modifying a user interface component in accordance with audio data.
[0052] FIG. 2 schematically illustrates a module according to the
disclosed embodiments.
[0053] FIG. 3 schematically illustrates an apparatus according to
the disclosed embodiments.
[0054] FIG. 4 illustrates an example of a user interface with user interface components being modified in accordance with audio data.
DETAILED DESCRIPTION
[0055] FIG. 1 is a flow chart illustrating a method according to the disclosed embodiments, describing the general steps of modifying a user interface component in accordance with audio data.
[0056] In a first step, 100, audio data is received. The audio data
may be a current part of a stored audio file being played by a
music player, or, alternatively, a current part of an audio stream
received by an audio data receiver.
[0057] Next, in a second step, 102, an audio feature is extracted
from the received audio data. Such an audio feature may be a
frequency spectrum of the audio data.
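The patent does not prescribe how the frequency spectrum is computed; a minimal sketch of step 102, assuming a NumPy FFT over one buffer of samples (the function name and windowing choice are illustrative only), could look as follows:

```python
import numpy as np

def extract_spectrum(samples, sample_rate):
    """Illustrative step 102: extract a frequency-spectrum audio feature
    from one buffer of received audio data."""
    window = np.hanning(len(samples))                   # reduce spectral leakage
    magnitudes = np.abs(np.fft.rfft(samples * window))  # magnitude spectrum
    frequencies = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return frequencies, magnitudes
```

Each call would correspond to one pass of step 102; the returned spectrum is the extracted audio feature handed on to step 104.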
[0058] Finally, in a third step, 104, one or several user interface components are modified in accordance with the extracted audio feature.
[0059] The third step, 104, may be subdivided into a first substep, 106, in which the extracted audio feature is classified into a predetermined feature representation. Thereafter, in a second substep, 108, the user interface component is modified in accordance with the predetermined feature representation.
[0060] By using predetermined feature representations, a number of user interface component appearance state images may be used. This implies that less computational power is needed in order to modify the user interface components in accordance with the audio data.
[0061] The user interface components can be 3-D rendered objects. Additionally, audio visualization effects can be superposed upon the 3-D rendered objects. Then, when audio data is received and an audio feature is extracted, the audio visualization effects are changed, which means that the appearance of the user interface components varies in accordance with the audio data.
[0062] Alternatively, 2-D objects may be used as user interface components. As in the case of 3-D rendered objects, audio visualization effects, which vary in accordance with the audio data, may be superposed upon the 2-D objects.
[0063] Alternatively, instead of having superposed audio visualization effects, the size of one or several user interface components may be modified in accordance with the extracted audio features. For instance, the user interface components may be configured to change size in accordance with the amount of bass frequencies in the audio data. In this way, during a drum solo the size of the user interface component will be large, and during a guitar solo the size will be small. Other options are that the colour, the orientation, the shape, the animation speed or other animation-specific attributes, such as the zooming level in a fractal animation, of the user interface components change in accordance with the audio data.
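As one hedged illustration of the size modification described above, the share of spectral energy below a cutoff could be mapped linearly to an icon scale factor. The cutoff frequency, scale range and function name below are invented for this sketch; the patent does not specify a mapping.

```python
def bass_to_scale(frequencies, magnitudes, cutoff_hz=250.0,
                  min_scale=0.8, max_scale=1.5):
    """Map the share of bass energy in a spectrum to a UI component
    scale factor: mostly bass -> large icon, mostly treble -> small icon."""
    total = sum(magnitudes) or 1.0                     # avoid division by zero
    bass = sum(m for f, m in zip(frequencies, magnitudes) if f <= cutoff_hz)
    return min_scale + (bass / total) * (max_scale - min_scale)
```

A colour, orientation or animation-speed attribute could be driven by the same energy ratio in place of the scale factor.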
[0064] If so-called environment mapping is utilised, existing solutions for music visualization may be used. This is an advantage since no new algorithms need to be developed. Another advantage of using environment mapping is that a dynamically changing environment map emphasizes the shape of a 3-D object, making UI components easier to recognize.
[0065] Optionally, different user interface components may be associated with different frequencies. For instance, when playing a rock song comprising several different frequencies, a first user interface component, such as a "messages" icon, may change in accordance with high frequencies, i.e. treble frequencies, and a second user interface component, such as a "contacts" icon, may change in accordance with low frequencies, i.e. bass frequencies.
[0066] The procedure of receiving audio data, 100, extracting an audio feature, 102, and modifying a UI component in accordance with the extracted audio feature, 104, may be repeated continuously as long as audio data is received. The procedure may, for instance, be repeated once every time the display is updated.
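The repeat-per-display-update procedure can be sketched as a loop performing one receive/extract/modify pass per frame. The function names and the None sentinel for end-of-audio are assumptions made for this illustration, not details from the patent.

```python
def run_visualization(max_frames, get_audio_buffer, extract, modify,
                      update_display):
    """One pass of steps 100, 102 and 104 per display update, repeated
    for as long as audio data is received."""
    for _ in range(max_frames):
        samples = get_audio_buffer()    # step 100: receive audio data
        if samples is None:             # no more audio data: stop repeating
            break
        feature = extract(samples)      # step 102: extract an audio feature
        update_display(modify(feature)) # step 104: modify the UI component
```

In a real terminal the loop bound would be the display refresh, not a frame count; the count here merely keeps the sketch self-contained.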
[0067] FIG. 2 schematically illustrates a module 200. The module
200 may be a software implemented module or a hardware implemented
module, such as an ASIC, or a combination thereof, such as an FPGA
circuit.
[0068] Audio data can be input to an audio feature extractor 202.
Thereafter, one or several audio features can be extracted from the
audio data, and then the extracted features can be transmitted to a
user interface (UI) modifier 204. UI modification data can be
generated in the UI modifier 204 based upon the extracted audio
feature(s). After having generated UI modification data, this data
can be output from the module 200.
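A purely software view of this dataflow through module 200 might be composed as below. The class names and the dictionary shape of the UI modification data are assumptions, since the patent leaves the representation open.

```python
class AudioFeatureExtractor:
    """Stand-in for extractor 202: reduces a buffer to one feature value."""
    def extract(self, samples):
        # mean energy per buffer; a fuller extractor would return a spectrum
        return sum(s * s for s in samples) / max(len(samples), 1)

class UIModifier:
    """Stand-in for UI modifier 204: wraps the feature as modification data."""
    def modify(self, feature):
        return {"effect": "histogram", "intensity": feature}

class Module200:
    """Audio data in, UI modification data out, as in FIG. 2."""
    def __init__(self):
        self.extractor = AudioFeatureExtractor()
        self.modifier = UIModifier()

    def process(self, samples):
        return self.modifier.modify(self.extractor.extract(samples))
```

The output would then be consumed by a graphics engine outside (or, as paragraph [0070] notes, inside) the module.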
[0069] The UI modification data may be data representing the extracted audio feature(s). Then, a graphics engine (not shown) is configured to receive the UI modification data, and based upon this UI modification data and original graphics data, the graphics engine is configured to determine graphics data comprising audio visualization effects.
[0070] Alternatively, the UI modification data may be complete
graphics data containing audio visualization effects. In other
words, the graphics engine may be contained within said module
200.
[0071] Optionally, the module may further comprise an audio feature classifier 206. The function of the audio feature classifier 206 can be to find characteristic features of the audio signal. Such a characteristic feature may be the amount of audio data corresponding to a certain frequency, such as a bass frequency or a treble frequency. Alternatively, if different UI components correspond to different characteristic features, a number of characteristic features may be determined in the audio feature classifier 206.
[0072] If an audio feature classifier 206 is present, a memory 208
comprising a number of predetermined feature representations may be
present as well. A predetermined feature representation may, for
instance, be the amount of audio data corresponding to a sound
between 20 Hz and 100 Hz. The number of predetermined feature
representations, i.e. the resolution of the classification, may be
user configurable, as well as the limits of each of the
predetermined feature representations.
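Classification into predetermined feature representations could be a simple banding of the spectrum; the band names and limits below (including the 20 Hz to 100 Hz example) are configurable placeholders, not values fixed by the patent.

```python
BANDS = [("bass", 20.0, 100.0),        # the 20 Hz - 100 Hz example above
         ("mid", 100.0, 2000.0),
         ("treble", 2000.0, 20000.0)]

def classify(frequencies, magnitudes, bands=BANDS):
    """Sum the spectral magnitude falling inside each predetermined band."""
    result = {name: 0.0 for name, _, _ in bands}
    for f, m in zip(frequencies, magnitudes):
        for name, lo, hi in bands:
            if lo <= f < hi:
                result[name] += m
                break
    return result
```

Adding or narrowing entries in the band list corresponds to raising the resolution of the classification, which the text above notes may be user configurable.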
[0073] Optionally, the module 200 may comprise an audio detector 209 configured to receive an audio activation signal. The audio activation signal may be transmitted from the music player when the playing of a song is started or, alternatively, when the radio is switched on. When the audio activation signal is received, an activation signal is transmitted to the audio feature extractor 202, the UI modifier 204 or the audio feature classifier 206.
[0074] Optionally, the module 200 may further comprise a memory 210
containing UI modification themes. A UI modification theme may
comprise information of how the extracted audio feature(s) is to be
presented in the UI. For instance, the extracted audio feature(s)
may be presented as a histogram superposed on a 3-D rendered UI
component, or the extracted audio feature(s) may be presented as a
number of circles superposed on a 3-D rendered UI component.
[0075] FIG. 3 schematically illustrates an apparatus 300, such as a mobile communication terminal, comprising the module 200, a music player 302, a graphics engine 304, a display 306, optionally a keypad 308 and optionally an audio output 310, such as a loudspeaker or a headphone output.
[0076] When a song is started in the music player 302, which may occur after key input actuation data has been received from the keypad 308, audio data and, optionally, an audio activation signal are transmitted from the music player 302 to the module 200. Optionally, the audio data may also be transmitted to the audio output 310.
[0077] The module 200 is configured to generate UI modification
data from extracted audio features of the audio data as is
described above. The UI modification data generated by the module
200 can be transmitted to the graphics engine 304. The graphics
engine 304 can, in turn, be configured to generate graphics data
presenting the extracted features of the audio data by using the UI
modification data.
[0078] After having determined the graphics data, this data may be
transmitted to the display 306, where it is shown to the user of
the apparatus 300. Alternatively, if the graphics engine 304 is
comprised within the module 200, graphics data is transmitted
directly from the module 200 to the display 306.
[0079] FIG. 4 illustrates an example of a user interface 400 with user interface components being modified in accordance with audio data.
[0080] A first user interface component may be illustrated as a "music" icon comprising a 3-D cuboid 402. Audio visualization effects in the form of a frequency diagram 404 can be superposed on the sides of the 3-D cuboid 402. Moreover, an identifying text "MUSIC" 406 may be available in connection with the 3-D cuboid 402.
[0081] A second user interface component illustrates a "messages" icon comprising a 3-D cylinder 408. Audio visualization effects in the form of a number of rings 410a, 410b, 410c may be superposed on the top of the 3-D cylinder 408. Moreover, an identifying text "MESSAGES" 412 may be available in connection with the 3-D cylinder 408.
[0082] A third user interface component illustrates a "contacts" icon comprising a 3-D cylinder 414. Audio visualization effects in the form of a 2-D frequency representation 416 may be superposed on the top of the 3-D cylinder 414. Moreover, an identifying text "CONTACTS" 418 may be available in connection with the 3-D cylinder 414.
[0083] A fourth user interface component illustrates an "Internet" icon comprising a 3-D cuboid 420. Audio visualization effects in the form of a number of stripes 422a, 422b, 422c may be superposed on the sides of the 3-D cuboid 420. Moreover, an identifying text "Internet" 424 may be available in connection with the 3-D cuboid 420.
[0084] The disclosed embodiments have mainly been described above
with reference to a few embodiments. However, as is readily
appreciated by a person skilled in the art, other embodiments than
the ones disclosed above are equally possible within the scope of
the disclosed embodiments, as defined by the appended patent
claims.
* * * * *