U.S. patent application number 14/037252 was filed with the patent office on 2015-03-19 for presenting audio based on biometrics parameters.
This patent application is currently assigned to SONY CORPORATION. The applicant listed for this patent is SONY CORPORATION. The invention is credited to Steven Friedlander, Sabrina Tai-Chen Yeh, and David Andrew Young.
Application Number: 14/037252
Publication Number: 20150081066
Family ID: 51228977
Publication Date: 2015-03-19

United States Patent Application 20150081066
Kind Code: A1
Yeh; Sabrina Tai-Chen; et al.
March 19, 2015
PRESENTING AUDIO BASED ON BIOMETRICS PARAMETERS
Abstract
A device includes at least one computer readable storage medium
bearing instructions executable by a processor, and at least one
processor configured for accessing the computer readable storage
medium to execute the instructions. The instructions configure the
processor for receiving signals from at least one biometric sensor
of an exerciser, based at least in part on the signals from the
biometric sensor, selecting a music piece, and playing the music
piece on a speaker.
Inventors: Yeh; Sabrina Tai-Chen (Laguna Beach, CA); Friedlander; Steven (San Diego, CA); Young; David Andrew (San Diego, CA)
Applicant: SONY CORPORATION, Tokyo, JP
Assignee: SONY CORPORATION, Tokyo, JP
Family ID: 51228977
Appl. No.: 14/037252
Filed: September 25, 2013
Related U.S. Patent Documents

Application Number: 61/878,835 (provisional)
Filing Date: Sep 17, 2013
Current U.S. Class: 700/94
Current CPC Class: A63B 71/06 20130101; G01C 21/00 20130101; H04M 2250/12 20130101; G16H 40/67 20180101; H04M 2250/02 20130101; G06F 3/017 20130101; A61B 5/1123 20130101; G01S 19/19 20130101; A61B 5/021 20130101; A61B 5/7415 20130101; G06F 16/60 20190101; G06F 3/165 20130101; A61B 5/1172 20130101; A61B 5/4815 20130101; A61B 5/11 20130101; A61B 5/1176 20130101; G01C 21/20 20130101; A61B 5/02438 20130101; H04B 5/0025 20130101; H04L 63/0853 20130101; A61B 5/14542 20130101; H04M 2250/04 20130101; G06F 3/0481 20130101; H04M 1/7253 20130101; A61B 5/14532 20130101; G08B 25/016 20130101; G16H 50/20 20180101; H04W 4/80 20180201; G09B 19/0038 20130101; A61B 5/02055 20130101; G16H 40/63 20180101; G06F 3/0484 20130101; G16H 20/30 20180101; G06Q 10/0639 20130101; G16H 50/30 20180101; G10L 15/00 20130101
Class at Publication: 700/94
International Class: G06F 17/30 20060101 G06F017/30
Claims
1. A device comprising: at least one computer readable storage
medium bearing instructions executable by a processor; at least one
processor configured for accessing the computer readable storage
medium to execute the instructions to configure the processor for:
receiving signals from at least one biometric sensor of an
exerciser; based at least in part on the signals from the biometric
sensor, selecting a music piece; and playing the music piece on a
speaker.
2. The device of claim 1, wherein the biometric sensor is a heart
rate sensor.
3. The device of claim 1, wherein the processor when executing the
instructions is configured for: selecting the music piece based at
least in part on determining that a tempo of the music piece
matches a tempo indicated in the signals from the biometric
sensor.
4. The device of claim 1, wherein the processor when executing the
instructions is configured for selecting the music piece from a
music piece library associated with the exerciser.
5. The device of claim 1, wherein the processor when executing the
instructions is configured for: selecting the music piece from a
general music piece library; and providing output to the exerciser
prompting the exerciser to purchase the music piece.
6. The device of claim 1, wherein the processor when executing the
instructions is configured for: determining whether a heart rate of
the exerciser as indicated by signals from the biometric sensor
exceeds a threshold; responsive to a determination that the heart
rate exceeds the threshold, decreasing a tempo of the music piece;
and responsive to a determination that the heart rate does not
exceed the threshold, not decreasing the tempo of the music piece.
7. The device of claim 1, wherein the processor when executing the
instructions is configured for: determining whether a heart rate of
the exerciser as indicated by signals from the biometric sensor is
below a threshold; responsive to a determination that the heart
rate is below the threshold, increasing a tempo of the music piece;
and responsive to a determination that the heart rate exceeds the
threshold, not increasing the tempo of the music piece.
8. The device of claim 1, wherein the processor when executing the
instructions is configured for selecting the music piece based at
least in part on: accessing metadata associated with the music
piece indicating a tempo of the music piece; and determining
whether the tempo of the music piece matches a tempo indicated in
the signals from the biometric sensor.
9. A method comprising: receiving signals from at least one biometric
sensor indicating at least one biometric parameter of a person; and
keying music being played to the person to the signals.
10. The method of claim 9, wherein the biometric parameter defines
a rate, and the rate is used to select a music piece the tempo of
which substantially matches the rate.
11. The method of claim 9, wherein the biometric parameter defines
a rate, and the rate is used to alter a tempo of a music piece to
substantially match the rate.
12. The method of claim 9, further comprising: selecting the music
based at least in part on determining that a tempo of the music
matches a tempo indicated in the signals from the biometric
sensor.
13. The method of claim 9, comprising selecting the music from a
music piece library associated with the person.
14. The method of claim 9, comprising: selecting the music from a
general music piece library; and providing output to the person
prompting the person to purchase the music.
15. The method of claim 9, comprising: determining whether a heart
rate of the person as indicated by signals from the biometric
sensor exceeds a threshold; responsive to a determination that the
heart rate exceeds the threshold, decreasing a tempo of the music;
and responsive to a determination that the heart rate does not
exceed the threshold, not decreasing the tempo of the music.
16. The method of claim 9, comprising: determining whether a heart rate
of the person as indicated by signals from the biometric sensor is
below a threshold; responsive to a determination that the heart
rate is below the threshold, increasing a tempo of the music; and
responsive to a determination that the heart rate exceeds the
threshold, not increasing the tempo of the music.
17. The method of claim 9, comprising selecting the music based at
least in part on: accessing metadata associated with the music
indicating a tempo of the music; and determining whether the tempo
of the music matches a tempo indicated in the signals from the
biometric sensor.
18. A computer readable storage medium that is not a carrier wave,
the computer readable storage medium bearing instructions which
when executed by a processor configure the processor to execute
logic comprising: receiving signals from at least one biometric
sensor indicating a cadence of physical activity of a person; and
based at least in part on the cadence, establishing a playlist of
music.
19. The computer readable storage medium of claim 18, wherein the
playlist includes music files that each respectively include a
first music tempo that is at least substantially similar to the
cadence.
20. The computer readable storage medium of claim 19, wherein the
cadence is a first cadence, and wherein the instructions configure
the processor to automatically alter the playlist responsive to a
change in cadence of the physical activity of the person, the
change in cadence establishing a second cadence, the altered
playlist including music files that respectively include a second
music tempo that is at least substantially similar to the second
cadence.
Description
[0001] This application claims priority to U.S. provisional patent
application Ser. No. 61/878,835, filed Sep. 17, 2013.
I. FIELD OF THE INVENTION
[0002] The present application relates generally to digital
ecosystems that are configured for use when engaging in physical
activity and/or fitness exercises.
II. BACKGROUND OF THE INVENTION
[0003] Society is becoming increasingly health-conscious. A wide
variety of exercise programs and workouts are now offered to encourage
people to stay fit. As understood herein, while
stationary exercise equipment often comes equipped with data
displays for the information of the exerciser, the information is
not tailored to the individual and is frequently repetitive and
monotonous. As further understood herein, people enjoy listening to
music as workout aids but the music typically is whatever is
broadcast within a gymnasium or provided on a recording device the
user may wear, again being potentially monotonous and unchanging in
pattern and beat in a way that is uncoupled from the actual
exercise being engaged in.
[0004] Thus, while present principles recognize that consumer
electronics (CE) devices may be used while engaged in physical
activity to enhance the activity, most audio and/or visual aids are
static in terms of not being tied to the actual exercise.
SUMMARY OF THE INVENTION
[0005] Present principles recognize that portable aids can be
provided to improve exercise performance, provide inspiration,
enable the sharing of exercise performance for social reasons, help
fulfill a person's exercise goals, analyze and track exercise
results, and provide virtual coaching to exercise participants in
an easy, intuitive manner.
[0006] Accordingly, a device includes at least one computer
readable storage medium bearing instructions executable by a
processor, and at least one processor configured for accessing the
computer readable storage medium to execute the instructions. The
instructions configure the processor for receiving signals from at
least one biometric sensor of an exerciser, and based at least in
part on the signals from the biometric sensor, selecting a music
piece, and then playing the music piece on a speaker.
[0007] In some embodiments the biometric sensor may be a heart rate
sensor. Also in some embodiments, the processor when executing the
instructions may be configured for selecting the music piece based
at least in part on determining that a tempo of the music piece
matches a tempo indicated in the signals from the biometric sensor.
Also in some embodiments, the processor when executing the
instructions may be configured for selecting the music piece from a
music piece library associated with the exerciser, and/or selecting
the music piece from a general music piece library and providing
output to the exerciser prompting the exerciser to purchase the
music piece.
[0008] Furthermore, if desired the processor when executing the
instructions may be configured for determining whether a heart rate
of the exerciser as indicated by signals from the biometric sensor
exceeds a threshold. The processor may also be configured for
decreasing a tempo of the music piece responsive to a determination
that the heart rate exceeds the threshold and not decreasing the
tempo of the music responsive to a determination that the heart
rate does not exceed the threshold.
[0009] Even further, if desired the processor when executing the
instructions may be configured for determining whether a heart rate
of the exerciser as indicated by signals from the biometric sensor
is below a threshold. The processor may also be configured for
increasing a tempo of the music piece responsive to a determination
that the heart rate is below the threshold and not increasing the
tempo of the music piece responsive to a determination that the
heart rate exceeds the threshold.
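The threshold logic of the two paragraphs above can be sketched in a few lines. This is an illustrative sketch only; the function name, threshold values, and step size are assumptions for the example, not part of the application.

```python
def adjust_tempo(current_tempo_bpm: float,
                 heart_rate_bpm: float,
                 upper_threshold: float = 160.0,
                 lower_threshold: float = 120.0,
                 step_bpm: float = 5.0) -> float:
    """Decrease the music tempo when the exerciser's heart rate exceeds
    the upper threshold, increase it when the heart rate falls below the
    lower threshold, and otherwise leave the tempo unchanged.
    (Thresholds and step size are illustrative assumptions.)"""
    if heart_rate_bpm > upper_threshold:
        return current_tempo_bpm - step_bpm
    if heart_rate_bpm < lower_threshold:
        return current_tempo_bpm + step_bpm
    return current_tempo_bpm
```

Using two distinct thresholds, as the paragraphs describe, leaves a middle band in which the tempo is held steady rather than oscillating with every heartbeat reading.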
[0010] Further still, in some embodiments the processor when
executing the instructions may be configured for selecting the
music piece based at least in part on accessing metadata associated
with the music piece indicating a tempo of the music piece, and
determining whether the tempo of the music piece matches a tempo
indicated in the signals from the biometric sensor.
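The metadata-based selection described above might look like the following sketch. The library format (dicts with "title" and "tempo" keys) and the tolerance parameter are assumptions made for this example.

```python
def select_music_piece(library, target_tempo_bpm, tolerance_bpm=5.0):
    """Return the library entry whose tempo metadata is closest to the
    tempo indicated by the biometric signals, or None if nothing in the
    library is within the (assumed) tolerance."""
    if not library:
        return None
    # Compare the tempo stored in each piece's metadata to the target.
    best = min(library, key=lambda piece: abs(piece["tempo"] - target_tempo_bpm))
    if abs(best["tempo"] - target_tempo_bpm) <= tolerance_bpm:
        return best
    return None
```

A tolerance is used because an exact beats-per-minute match between sensed tempo and tempo metadata will rarely occur in practice.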
[0011] In another aspect, a method includes receiving signals from
at least one biometric sensor indicating at least one biometric
parameter of a person and keying music being played to the person
to the signals.
[0012] In still another aspect, a computer readable storage medium
that is not a carrier wave bears instructions which when executed
by a processor configure the processor to execute logic including
receiving signals from at least one biometric sensor indicating a
cadence of physical activity of a person, and based at least in
part on the cadence, establishing a playlist of music.
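One way to model the cadence-keyed playlist of this aspect is sketched below. Music files are represented here as (title, tempo) pairs, and the matching window is an assumed parameter; neither is specified by the application.

```python
def build_playlist(music_files, cadence_spm, window_bpm=8.0):
    """Establish a playlist of files whose tempo is at least
    substantially similar to the sensed cadence (here modeled as
    within +/- window_bpm beats per minute)."""
    return [f for f in music_files if abs(f[1] - cadence_spm) <= window_bpm]

def on_cadence_change(music_files, new_cadence_spm):
    # When the sensed cadence changes, the playlist is simply
    # re-established around the new cadence.
    return build_playlist(music_files, new_cadence_spm)
```

Rebuilding the list on a cadence change mirrors the claimed behavior of automatically altering the playlist when the person speeds up or slows down.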
[0013] The details of the present invention, both as to its
structure and operation, can best be understood in reference to the
accompanying drawings, in which like reference numerals refer to
like parts, and in which:
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] FIG. 1 is a block diagram of an example system including an
example CE device in accordance with present principles;
[0015] FIGS. 2-4 are example flowcharts of logic to be executed by
a CE device for providing information and/or music to a user during
physical activity in accordance with present principles;
[0016] FIG. 5 is an example flowchart of logic to be executed by a
server for providing music and/or information to a CE device in
accordance with present principles;
[0017] FIGS. 6-9 are example user interfaces (UIs) presentable on a
CE device in accordance with present principles; and
[0018] FIGS. 10 and 11 are exemplary illustrations that demonstrate
present principles.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0019] This disclosure relates generally to consumer electronics
(CE) device based user information. With respect to any computer
systems discussed herein, a system herein may include server and
client components, connected over a network such that data may be
exchanged between the client and server components. The client
components may include one or more computing devices including
portable televisions (e.g. smart TVs, Internet-enabled TVs),
portable computers such as laptops and tablet computers, and other
mobile devices including smart phones and additional examples
discussed below. These client devices may employ, as non-limiting
examples, operating systems from Apple, Google, or Microsoft. A
Unix operating system may be used. These operating systems can
execute one or more browsers such as a browser made by Microsoft or
Google or Mozilla or other browser program that can access web
applications hosted by the Internet servers over a network such as
the Internet, a local intranet, or a virtual private network.
[0020] As used herein, instructions refer to computer-implemented
steps for processing information in the system. Instructions can be
implemented in software, firmware or hardware; hence, illustrative
components, blocks, modules, circuits, and steps are set forth in
terms of their functionality.
[0021] A processor may be any conventional general purpose single-
or multi-chip processor that can execute logic by means of various
lines such as address lines, data lines, and control lines and
registers and shift registers. Moreover, any logical blocks,
modules, and circuits described herein can be implemented or
performed, in addition to a general purpose processor, in or by a
digital signal processor (DSP), a field programmable gate array
(FPGA) or other programmable logic device such as an application
specific integrated circuit (ASIC), discrete gate or transistor
logic, discrete hardware components, or any combination thereof
designed to perform the functions described herein. A processor can
be implemented by a controller or state machine or a combination of
computing devices.
[0022] Any software modules described by way of flow charts and/or
user interfaces herein can include various sub-routines,
procedures, etc. It is to be understood that logic divulged as
being executed by a module can be redistributed to other software
modules and/or combined together in a single module and/or made
available in a shareable library.
[0023] Logic when implemented in software, can be written in an
appropriate language such as but not limited to C# or C++, and can
be stored on or transmitted through a computer-readable storage
medium such as a random access memory (RAM), read-only memory
(ROM), electrically erasable programmable read-only memory
(EEPROM), compact disk read-only memory (CD-ROM) or other optical
disk storage such as digital versatile disc (DVD), magnetic disk
storage or other magnetic storage devices including removable thumb
drives, etc. A connection may establish a computer-readable medium.
Such connections can include, as examples, hard-wired cables
including fiber optics and coaxial wires and digital subscriber
line (DSL) and twisted pair wires. Such connections may include
wireless communication connections including infrared and
radio.
[0024] In an example, a processor can access information over its
input lines from data storage, such as the computer readable
storage medium, and/or the processor accesses information
wirelessly from an Internet server by activating a wireless
transceiver to send and receive data. Data typically is converted
from analog signals to digital and then to binary by circuitry
between the antenna and the registers of the processor when being
received and from binary to digital to analog when being
transmitted. The processor then processes the data through its
shift registers to output calculated data on output lines, for
presentation of the calculated data on the CE device.
[0025] Components included in one embodiment can be used in other
embodiments in any appropriate combination. For example, any of the
various components described herein and/or depicted in the Figures
may be combined, interchanged or excluded from other
embodiments.
[0026] "A system having at least one of A, B, and C" (likewise "a
system having at least one of A, B, or C" and "a system having at
least one of A, B, C") includes systems that have A alone, B alone,
C alone, A and B together, A and C together, B and C together,
and/or A, B, and C together, etc.
[0027] Before describing FIG. 1, it is to be understood that the CE
devices and software described herein are understood to be usable
in the context of a digital ecosystem. Thus, as understood herein,
a computer ecosystem, or digital ecosystem, may be an adaptive and
distributed socio-technical system that is characterized by its
sustainability, self-organization, and scalability. Inspired by
environmental ecosystems, which consist of biotic and abiotic
components that interact through nutrient cycles and energy flows,
complete computer ecosystems consist of hardware, software, and
services that in some cases may be provided by one company, such as
Sony Electronics. The goal of each computer ecosystem is to provide
consumers with everything that may be desired, at least in part through
services and/or software that may be exchanged via the Internet.
Moreover, interconnectedness and sharing among elements of an
ecosystem, such as applications within a computing cloud, provides
consumers with increased capability to organize and access data and
presents itself as the future characteristic of efficient
integrative ecosystems.
[0028] Two general types of computer ecosystems exist: vertical and
horizontal computer ecosystems. In the vertical approach, virtually
all aspects of the ecosystem are associated with the same company
(e.g. produced by the same manufacturer), and are specifically
designed to seamlessly interact with one another. Horizontal
ecosystems, on the other hand, integrate aspects such as hardware
and software that are created by differing entities into one
unified ecosystem. The horizontal approach allows for greater
variety of input from consumers and manufacturers, increasing the
capacity for novel innovations and adaptations to changing demands.
But regardless, it is to be understood that some digital
ecosystems, including those referenced herein, may embody
characteristics of both the horizontal and vertical ecosystems
described above.
[0029] Accordingly, it is to be further understood that these
ecosystems may be used while engaged in physical activity to e.g.
provide inspiration, goal fulfillment and/or achievement, automated
coaching/training, health and exercise analysis, convenient access
to data, group sharing (e.g. of fitness data), and increased
accuracy of health monitoring, all while doing so in a stylish and
entertaining manner. Further still, the devices disclosed herein
are understood to be capable of making diagnostic determinations
based on data from various sensors (such as those described below
in reference to FIG. 1) for use while exercising, for exercise
monitoring (e.g. in real time), and/or for sharing of data with
friends (e.g. using a social networking service) even when not all
people have the same types and combinations of sensors on their
respective CE devices.
[0030] Thus, it is to be understood that the CE devices described
herein may allow for easy and simplified user interaction with the
device so as to not be unduly bothersome or encumbering e.g.
before, during, and after an exercise.
[0031] It is also to be understood that the CE device processors
described herein can access information over their input lines from
data storage, such as the computer readable storage medium, and/or
the processor(s) accesses information wirelessly from an Internet
server by activating a wireless transceiver to send and receive
data. Data typically is converted from analog signals to digital
and then to binary by circuitry between the antenna and the
registers of the processor when being received and from binary to
digital to analog when being transmitted. The processor then
processes the data through its shift registers according to
algorithms such as those described herein to output calculated data
on output lines, for presentation of the calculated data on the CE
device.
[0032] Now specifically referring to FIG. 1, an example system 10
is shown, which may include one or more of the example devices
mentioned above and described further below to enhance fitness
experiences in accordance with present principles. The first of the
example devices included in the system 10 is an example consumer
electronics (CE) device 12 that may be waterproof (e.g., for use
while swimming). The CE device 12 may be, e.g., a computerized
Internet enabled ("smart") telephone, a tablet computer, a notebook
computer, a wearable computerized device such as e.g. computerized
Internet-enabled watch, a computerized Internet-enabled bracelet,
other computerized Internet-enabled fitness devices, a computerized
Internet-enabled music player, computerized Internet-enabled
headphones, a computerized Internet-enabled implantable device such as
an implantable skin device, etc., and even e.g. a computerized
Internet-enabled television (TV). Regardless, it is to be
understood that the CE device 12 is configured to undertake present
principles (e.g. communicate with other CE devices to undertake
present principles, execute the logic described herein, and perform
any other functions and/or operations described herein).
[0033] Accordingly, to undertake such principles the CE device 12
can include some or all of the components shown in FIG. 1. For
example, the CE device 12 can include one or more touch-enabled
displays 14, one or more speakers 16 for outputting audio in
accordance with present principles, and at least one additional
input device 18 such as e.g. an audio receiver/microphone for e.g.
entering audible commands to the CE device 12 to control the CE
device 12. The example CE device 12 may also include one or more
network interfaces 20 for communication over at least one network
22 such as the Internet, a WAN, a LAN, etc. under control of one
or more processors 24. It is to be understood that the processor 24
controls the CE device 12 to undertake present principles,
including the other elements of the CE device 12 described herein
such as e.g. controlling the display 14 to present images thereon
and receiving input therefrom. Furthermore, note the network
interface 20 may be, e.g., a wired or wireless modem or router, or
other appropriate interface such as, e.g., a wireless telephony
transceiver, WiFi transceiver, etc.
[0034] In addition to the foregoing, the CE device 12 may also
include one or more input ports 26 such as, e.g., a USB port to
physically connect (e.g. using a wired connection) to another CE
device and/or a headphone port to connect headphones to the CE
device 12 for presentation of audio from the CE device 12 to a user
through the headphones. The CE device 12 may further include one or
more tangible computer readable storage medium 28 such as
disk-based or solid state storage, it being understood that the
computer readable storage medium 28 may not be a carrier wave. Also
in some embodiments, the CE device 12 can include a position or
location receiver such as but not limited to a GPS receiver and/or
altimeter 30 that is configured to e.g. receive geographic position
information from at least one satellite and provide the information
to the processor 24 and/or determine an altitude at which the CE
device 12 is disposed in conjunction with the processor 24.
However, it is to be understood that another suitable position
receiver other than a GPS receiver and/or altimeter may be used in
accordance with present principles to e.g. determine the location
of the CE device 12 in e.g. all three dimensions.
[0035] Continuing the description of the CE device 12, in some
embodiments the CE device 12 may include one or more cameras 32
that may be, e.g., a thermal imaging camera, a digital camera such
as a webcam, and/or a camera integrated into the CE device 12 and
controllable by the processor 24 to gather pictures/images and/or
video in accordance with present principles (e.g. to share aspects
of a physical activity such as hiking with social networking
friends). Also included on the CE device 12 may be a Bluetooth
transceiver 34 and other Near Field Communication (NFC) element 36
for communication with other devices using Bluetooth and/or NFC
technology, respectively. An example NFC element can be a radio
frequency identification (RFID) element.
[0036] Further still, the CE device 12 may include one or more
motion sensors 37 (e.g., an accelerometer, gyroscope, cyclometer,
magnetic sensor, infrared (IR) motion sensors such as passive IR
sensors, an optical sensor, a speed and/or cadence sensor, a
gesture sensor (e.g. for sensing gesture command), etc.) providing
input to the processor 24. The CE device 12 may include still other
sensors such as e.g. one or more climate sensors 38 (e.g.
barometers, humidity sensors, wind sensors, light sensors,
temperature sensors, etc.) and/or one or more biometric sensors 40
(e.g. heart rate sensors and/or heart monitors, calorie counters,
blood pressure sensors, perspiration sensors, odor and/or scent
detectors, fingerprint sensors, facial recognition sensors, iris
and/or retina detectors, DNA sensors, oxygen sensors (e.g. blood
oxygen sensors and/or VO2 max sensors), glucose and/or blood sugar
sensors, sleep sensors (e.g. a sleep tracker), pedometers and/or
speed sensors, body temperature sensors, nutrient and metabolic
rate sensors, voice sensors, lung input/output and other
cardiovascular sensors, etc.) also providing input to the processor
24. In addition to the foregoing, it is noted that in some
embodiments the CE device 12 may also include a kinetic energy
harvester 42 to e.g. charge a battery (not shown) powering the CE
device 12.
[0037] Still referring to FIG. 1, in addition to the CE device 12,
the system 10 may include one or more other CE device types such
as, but not limited to, a computerized Internet-enabled bracelet
44, computerized Internet-enabled headphones and/or ear buds 46,
computerized Internet-enabled clothing 48, a computerized
Internet-enabled exercise machine 50 (e.g. a treadmill, exercise
bike, elliptical machine, etc.), etc. Also shown is a computerized
Internet-enabled gymnasium entry kiosk 52 permitting authorized
entry to a gymnasium housing the exercise machine 50. It is to be
understood that other CE devices included in the system 10
including those described in this paragraph may respectively
include some or all of the various components described above in
reference to the CE device 12 such as but not limited to e.g. the
biometric sensors and motion sensors described above, as well as
the position receivers, cameras, input devices, and speakers also
described above.
[0038] Thus, for instance, the headphones/ear buds 46 may include a
heart rate sensor configured to sense a person's heart rate when the
person is wearing the headphones, the clothing 48 may include
sensors such as perspiration sensors, climate sensors, and heart
sensors for measuring the intensity of a person's workout, the
exercise machine 50 may include a camera mounted on a portion
thereof for gathering facial images of a user so that the machine
50 may thereby determine whether a particular facial expression is
indicative of a user struggling to keep the pace set by the
exercise machine 50 and/or an NFC element to e.g. pair the machine
50 with the CE device 12 and hence access a database of preset
workout routines, and the kiosk 52 may include an NFC element
permitting entry to a person authenticated as being authorized for
entry based on input received from a complementary NFC element
(such as e.g. the NFC element 36 on the device 12). Also note that
all of the devices described in reference to FIG. 1, including a
server 54 to be described shortly, may communicate with each other
over the network 22 using a respective network interface included
thereon, and may each also include a computer readable storage
medium that may not be a carrier wave for storing logic and/or
software code in accordance with present principles.
[0039] Now in reference to the afore-mentioned at least one server
54, it includes at least one processor 56, at least one tangible
computer readable storage medium 58 that may not be a carrier wave
such as disk-based or solid state storage, and at least one network
interface 60 that, under control of the processor 56, allows for
communication with the other CE devices of FIG. 1 over the network
22, and indeed may facilitate communication therebetween in
accordance with present principles. Note that the network interface
60 may be, e.g., a wired or wireless modem or router, WiFi
transceiver, or other appropriate interface such as, e.g., a
wireless telephony transceiver.
[0040] Accordingly, in some embodiments the server 54 may be an
Internet server, may facilitate fitness coordination and/or data
exchange between CE devices in accordance with present
principles, and may include and perform "cloud" functions such that
the CE devices of the system 10 may access a "cloud" environment
via the server 54 in example embodiments to e.g. stream music to
listen to while exercising and/or pair two or more devices (e.g. to
"throw" music from one device to another).
[0041] Turning now to FIG. 2, an example flowchart of logic to be
executed by a CE device such as the CE device 12 in accordance with
present principles for presenting non-verbal audio cues is shown.
The logic begins at block 70 where the logic receives (e.g.
planned) exercise information, planned physical activity
information, planned exercise route information, etc. in accordance
with present principles and as discussed herein (e.g. a user inputs
the information using one of the user interfaces referenced
herein). For instance, at block 70 the logic may receive
information pertaining to a planned exercise route (e.g. a jog)
through the user's neighborhood (e.g. and may even use a user's
previous average pace on past jogs) such as the user's desired
pace, maximum time to completion of the route, etc. As another
example, the logic at block 70 may receive information indicating
that the user wishes to ride a bike for ten minutes at a moderately
fast pace, then ten minutes at a very fast pace, then ten minutes
of cooling down time, and indeed may even specify the desired miles
per hour at which the user wishes to bicycle for each segment. As but
one more example, a user's personal trainer may set a workout
routine at the trainer's CE device and then transmit the routine to
the user's CE device for presentation thereon.
[0042] In any case, after block 70 the logic proceeds to block 72
where the logic determines music (e.g. one or more music files
stored on and/or accessible to the CE device) to match at least the
(e.g. estimated or user-indicated/desired) tempo and/or cadence of
at least the first segment of the user's exercise
routine/information (e.g. using the example above, at least selects
music matching a tempo for the user to bicycle at a moderately fast
pace to begin the routine). Note that the tempo to music matching
may be e.g. initially based on an estimate by the CE device of a
tempo/cadence the user should maintain to comport with the exercise
information (e.g., a certain tempo for pedaling the exercise
bicycle to maintain the desired speed). As another example, the
tempo to music matching may be estimated at first and then later
adjusted to match the actual cadence of the user after the
beginning of the workout. As such, e.g. the first song before a
user takes his or her first step on a jog may contain a tempo that
is estimated to be the pace the user will set and/or should
maintain, and thereafter the next song's tempo may be matched to
the actual pace of the user. For instance and in terms of matching
music to a user's actual pace, if the user is exercising at one
hundred fifty strides per minute, a piece of music may be presented
that includes one hundred fifty beats per minute for the user to
thereby set his or her pace by moving one stride for every musical
beat.
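By way of illustration only, and not as a limiting implementation of the logic at block 72, the tempo-to-cadence matching described above may be sketched in Python as follows, where the track titles and BPM values are hypothetical examples:

```python
# Illustrative sketch: select the track whose beats-per-minute value most
# closely matches the user's cadence in strides per minute, so the user
# may take one stride per musical beat. Titles and BPM values are
# hypothetical, not part of any actual music library.

def match_track_to_cadence(tracks, cadence_spm):
    """Return the track whose BPM is closest to the given cadence."""
    return min(tracks, key=lambda t: abs(t["bpm"] - cadence_spm))

library = [
    {"title": "Warmup Groove", "bpm": 120},
    {"title": "Steady Stride", "bpm": 150},
    {"title": "Hill Sprint", "bpm": 175},
]

# A runner at one hundred fifty strides per minute is matched with the
# 150 BPM track.
print(match_track_to_cadence(library, 150)["title"])
```

The same routine serves both the initial estimated cadence and the actual measured cadence once the workout has begun.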
[0043] In addition, note that tempo of the music itself may be
determined by accessing metadata associated with the respective
music file that contains tempo information (e.g., in beats per
minute). As another example, the CE device may parse or otherwise
access the music file to identify a tempo (e.g. identify a beat
based on a repeated snare drum sound, inflections in a singer's
voice, the changing of guitar chords, etc.), and then use the
identified music tempo if it matches the user's pace/cadence (e.g.
as close as possible, e.g. accounting for minor variances in the
user's cadence as may naturally occur from step to step on a jog,
or revolution to revolution on an exercise bicycle). Thus, it may
be appreciated that e.g. at a time prior to receiving exercise
information at block 70, the CE device may access all music files
that are accessible to it (or e.g. a subset of files based on
genre, artist, song length, etc.) to determine the beats per minute
of each one, and then create a data table and/or metadata for later
access by the CE device for efficiently identifying music with a
tempo that matches the user's cadence at a given moment during an
exercise routine without e.g. having to at that time parse the
user's entire music library for music matching the user's
cadence.
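The pre-scan of the music library described above might be sketched as follows; the file paths, bucket granularity, and tolerance are illustrative assumptions rather than requirements of the approach:

```python
# Illustrative sketch: pre-compute a BPM lookup table for a music library
# so that, mid-workout, tracks matching the user's cadence can be found
# without parsing any files at that time.
from collections import defaultdict

def build_bpm_table(files):
    """Map each rounded BPM value to the file paths with that tempo."""
    table = defaultdict(list)
    for f in files:
        table[round(f["bpm"])].append(f["path"])
    return table

def lookup(table, cadence, tolerance=3):
    """Return all files within `tolerance` BPM of the cadence."""
    hits = []
    for bpm in range(cadence - tolerance, cadence + tolerance + 1):
        hits.extend(table.get(bpm, []))
    return hits

files = [{"path": "a.mp3", "bpm": 149.6}, {"path": "b.mp3", "bpm": 128.0}]
table = build_bpm_table(files)
print(lookup(table, 150))  # only a.mp3 is within 3 BPM of 150
```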
[0044] Still in reference to FIG. 2, after block 72 the logic
proceeds to block 74 where the logic receives an instruction to
begin monitoring the user's exercise and thus to begin presenting
music in accordance with e.g. the cadence of the user. The logic
then proceeds to decision diamond 76 where the logic determines whether a turn
is upcoming, e.g. a left or right turn the user should make to
continue traveling on a pre-planned exercise route. Note that
although the present example will be discussed in terms of making a
turn, present principles apply equally to any alteration to a
user's direction in order to continue following a route (e.g. a
fork in the road, a slight left turn, a u-turn, jumping to an upper
tier of a structure in the case of parkour, etc.). As an aside,
also note that in some implementations, a non-verbal audio cue may
also be associated with an instruction for the user to e.g.
continue going straight such as at a road intersection.
[0045] Regardless, if the logic determines that a turn is not
upcoming (e.g. not within a predefined threshold distance for turns
set by a user prior to embarking on the exercise), the logic
proceeds to block 77 where the logic continues monitoring the user's
exercise and continues presenting music matched to the user's
cadence in accordance with present principles. If, however, the
logic determines that a turn is upcoming, the logic instead
proceeds to block 78 where the logic notifies and/or cues the user
of how to proceed using at least one non-verbal audio cue.
[0046] For instance, a single beeping sound may be associated with
a left turn (e.g. the user has preset the single beep to be
associated with a left turn) while a double beeping sound may be
associated with a right turn (e.g., the user having preset the
double beep as well). In addition to or in lieu of the foregoing,
should the user be wearing headphones such as the ones described
above, the non-verbal cue may be presented in the left ear piece
(only, or more prominently/loudly) to indicate a left turn should
be made, and the right ear piece (only, or more prominently/loudly)
to indicate a right turn should be made. In addition to or in lieu
of the foregoing, other non-verbal cues that may be presented to a
user e.g. in ear pieces in accordance with present principles are
haptic non-verbal cues and/or vibrations such that e.g. a
non-verbal vibration cue (e.g. the ear piece(s) vibrates based on a
vibrator located in each respective ear piece that is in
communication with the CE device's processor) may be presented on
the left ear piece (only, or more prominently) to indicate a left
turn should be made, and the right ear piece (only, or more
prominently) to indicate a right turn should be made.
[0047] Also in addition to or in lieu of the foregoing, if desired
the non-verbal audio cue may be accompanied (e.g. immediately
before or after the non-verbal audio cue) by a verbal cue such as
an instruction to "turn left at the next street." Also note that
the non-verbal audio cue need not be a single or double beep and
that other non-verbal audio cues may be used that themselves
indicate detailed information such as e.g. using an audible
representation of Morse code to provide turn information to a
user.
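As one non-limiting sketch of the cue selection just described, a turn direction may be mapped to a beep count and per-earpiece gains; the beep counts and gain values below are user-configurable assumptions:

```python
# Illustrative sketch: map an upcoming turn direction to a non-verbal cue,
# expressed as (number of beeps, left earpiece gain, right earpiece gain).
# A left turn cues the left earpiece only; a right turn cues the right.

def turn_cue(direction, left_beeps=1, right_beeps=2):
    """Return (beep_count, left_gain, right_gain) for a turn direction."""
    if direction == "left":
        return (left_beeps, 1.0, 0.0)   # single beep, left ear only
    if direction == "right":
        return (right_beeps, 0.0, 1.0)  # double beep, right ear only
    return (0, 1.0, 1.0)                # e.g. continue straight: both ears

print(turn_cue("left"))
print(turn_cue("right"))
```

Presenting the cue "more prominently" rather than exclusively in one ear would simply use unequal non-zero gains in place of the 0.0 values.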
[0048] After block 78, the logic proceeds to block 80 where the
logic determines that another segment of the planned exercise/route
has begun, and accordingly presents music matching the
tempo/cadence of the user as he or she embarks on the next segment
(e.g. actual cadence, or desired cadence based on exercise
information determined by the user prior to embarking on the run).
As an example, the logic may determine at block 80 that the user
has transitioned from running on flat ground to running up a hill,
and accordingly presents music with a slower tempo relative to the
music presented while the user was on flat ground (e.g. and also
based upon segment settings set by a user where the user indicated
that a slower pace up the hill was desired relative to the user's
pace on flat ground). Conversely, if the user wished to "push it"
up the hill, music may be presented with a faster tempo than that
presented when the user was on flat ground, thereby assisting the
user with matching a running cadence to the music tempo to thus
proceed up the hill at a pace desired by the user (e.g. also based
on predefined settings by the user).
[0049] In any case, after block 80 the logic proceeds to decision
diamond 82, at which the logic determines whether a virtual
opponent, if the user manipulated the CE device to present a
representation of one while proceeding on the exercise, is
approaching or moving away from the user. For instance, the user
may set settings for a virtual opponent that represents the user's
minimum preferred average pace or speed at which to exercise, and
thus can determine based on the virtual opponent representation
whether the user's actual pace has slowed below the minimum average
pace based on a non-verbal audio cue including an up Doppler effect
(e.g. sound frequency increasing) thereby indicating that the
virtual opponent is approaching. Accordingly, the user can also
determine that the virtual opponent is receding (e.g. that the
"virtual" distance separating the user and the virtual opponent is
becoming larger) based on a non-verbal audio cue including a down
Doppler effect (e.g. sound frequency decreasing). Furthermore, as
the frequency changes from increasing to decreasing and vice versa,
the Doppler effect sound may move from one earpiece of a headphone
set to the other (e.g. be
presented more prominently in one ear piece, then fade in that ear
piece and be increasingly more prominently presented in the other
ear piece) to further signify the position of the virtual opponent.
Also note that present principles recognize that such non-verbal
Doppler cues need not be presented constantly during the exercise
to indicate to the user where the virtual opponent is relative to
the user, and may e.g. only be presented to the user responsive to
a determination that the virtual opponent is within a threshold
distance of the user (e.g. as set by the user prior to embarking on
the exercise routine).
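The up/down Doppler cue for the virtual opponent can be sketched using the standard Doppler relation for a moving sound source; the base frequency is an illustrative choice, and this sketch is offered only as one possible rendering of the cue:

```python
# Illustrative sketch: shift the pitch of the virtual-opponent cue up when
# the opponent is closing on the user and down when it is receding, per
# the standard moving-source Doppler relation f' = f * c / (c - v).

SPEED_OF_SOUND = 343.0  # m/s, at roughly room temperature

def doppler_pitch(base_hz, closing_speed):
    """Return the perceived pitch of the cue.

    closing_speed > 0 means the virtual opponent is approaching the user
    (up Doppler); closing_speed < 0 means it is receding (down Doppler).
    """
    return base_hz * SPEED_OF_SOUND / (SPEED_OF_SOUND - closing_speed)

approaching = doppler_pitch(440.0, 3.0)   # pitch rises above 440 Hz
receding = doppler_pitch(440.0, -3.0)     # pitch falls below 440 Hz
print(approaching > 440.0, receding < 440.0)
```

A presentation threshold, as described above, would simply gate this computation on the virtual separation distance.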
[0050] Still in reference to decision diamond 82, if the logic
determines that a virtual opponent is not approaching or moving
away from the user (e.g., the pace of the user and the "virtual"
pace of the virtual opponent are identical or nearly identical,
and/or the virtual opponent is not within a threshold distance to
present any indication to the user of the location of the virtual
opponent), the logic may revert back to decision diamond 76 and
continue from there. If, however, the logic determines that a
virtual opponent is approaching or moving away from the user in
accordance with present principles, the logic moves to block 84
where at least one non-verbal audio cue that the virtual opponent
is approaching or moving away from the user is presented on the CE
device. Thereafter, the logic may revert from block 84 to decision
diamond 76 and proceed from there.
[0051] Before moving on to FIG. 3, note that the non-verbal audio
cue indicating the position of the virtual opponent may be
accompanied by (e.g. presented concurrently with, before, and/or
after) a verbal audio cue indicating the position of the virtual
opponent. For example, the non-verbal Doppler effect sounds may be
accompanied by a verbal indication that "the virtual opponent is
approaching."
[0052] Also before moving on to FIG. 3, it is to be understood that
e.g. planned exercise information that is received by the logic may
include an (e.g. predefined) exercise segment time period (e.g. ten
minutes), and the non-verbal cue may thus be and/or include a music
segment (e.g. a music file or portion thereof) having a time period
of substantially the exercise segment time period to e.g. inform
the user of the time remaining for that particular segment. Thus,
in some implementations the music segment may begin at
substantially the start of the exercise segment time period and end
at substantially the end of the exercise segment time period.
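The segment-length matching of paragraph [0052] may be sketched as follows; the track titles and durations are hypothetical:

```python
# Illustrative sketch: choose the track whose duration most nearly equals
# a planned exercise segment's time period, so that the song ending
# informs the user that the segment is substantially over.

def best_length_match(tracks, segment_seconds):
    """Return the track whose duration is closest to the segment length."""
    return min(tracks, key=lambda t: abs(t["seconds"] - segment_seconds))

tracks = [
    {"title": "Short Cooldown", "seconds": 300},
    {"title": "Ten Minute Tempo", "seconds": 598},
]

# For a ten-minute (600 second) segment, the ~598 second track is chosen.
print(best_length_match(tracks, 600)["title"])
```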
[0053] Continuing the detailed description in reference to FIG. 3,
another example flowchart of logic to be executed by a CE device
such as the CE device 12 in accordance with present principles is
shown, this time for creating a playlist of music matching a user's
cadence. It is to be understood that the logic of FIG. 3 (and/or
FIG. 4) may be combined with FIG. 2 in some implementations, and/or
executed concurrently therewith. Regardless, the logic of FIG. 3
begins at block 90 where the logic receives exercise information in
accordance with present principles. The logic then proceeds to
block 92 where the logic receives one or more biometric signals
from one or more biometric sensors in communication with the CE
device as set forth herein. The logic then proceeds to block 94
where the logic accesses music metadata indicating a music tempo
for each of one or more music files for matching the user's
cadence with at least one piece of music having at least a
substantially similar tempo in accordance with present principles.
Thereafter,
the logic proceeds to block 96 where the logic establishes a
playlist including one or more music files of music having a tempo
matching a desired cadence, actual cadence, etc. of the user. Also
at block 96 the logic begins presenting the music of the
playlist.
[0054] After block 96, the logic proceeds to decision diamond 98
where the logic determines whether the user's cadence has changed
(e.g. actual cadence, and/or estimated based on the transition from
one exercise segment to another based on time and/or location such
as beginning to proceed up a hill). If the logic determines at
diamond 98 that the user's cadence has not changed, the logic
proceeds to block 100 where the logic continues presenting music
from the playlist of music of the same tempo or substantially
similar tempo. If, however, the logic determines at diamond 98 that
the user's cadence has changed, the logic instead proceeds to
decision diamond 102 where the logic determines whether a biometric
parameter of a user has exceeded a threshold, or is below a
threshold, depending on the particular parameter, acceptable health
ranges, user settings, etc. For instance, if the user's heart rate
exceeds a heart rate threshold, that could be detrimental to the
user's heart and the user may thus wish to be provided with a
notification in such a case. As another example where a
notification may be appropriate, if the user's core body
temperature exceeds a temperature threshold (e.g. the user is too
hot) or even falls beneath a threshold (e.g. the user is too cold),
that could be detrimental to the user's brain and thus a
notification of the user's temperature would be beneficial.
[0055] In any case, should the logic determine that at least one
biometric parameter does not exceed a threshold or is not below
another threshold (e.g. the biometric parameter is within an
acceptable range, healthy range, and/or user-desired range as input
to the CE device by the user), the logic proceeds to block 100 and
may subsequently proceed from there. If, however, the logic
determines that a threshold has been breached, the logic instead
moves to block 104 where the logic instructs the user to speed up
the user's cadence/pace and/or slow down as may be appropriate
depending on the biometric parameter to be brought within an
acceptable range. Also note that at block 104 should the biometric
parameter be dangerous to the user's health (e.g. based on a data
table correlating as much), the logic may instead instruct the user
to stop exercising completely and/or automatically without user
input provide a notification to an emergency service along with
location coordinates from a GPS receiver on the CE device.
[0056] Regardless, after block 104 the logic proceeds to block 106
where the logic changes or alters the playlist (and even entirely
replaces the previous playlist) to include music with a tempo
selected to bring the user's biometric parameter within an
acceptable range. For example, if the logic determines that a
biometric parameter exceeds a threshold, and thus that a user needs
to slow down, the playlist may be altered to present (e.g., from
that point on) music with a slower tempo than was previously
presented. Then after block 106 the logic may revert back to
decision diamond 98 and proceed again from there. For completeness
before moving on to FIG. 4, also note that based on a positive
determination at decision diamond 98, in other exemplary instances
the logic may proceed directly to block 106 where the playlist is
changed to match the user's current cadence, which has changed
according to a positive determination made at diamond 98.
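The playlist alteration at block 106 might be sketched as follows, with the shift amount, playlist size, and track data all being hypothetical choices:

```python
# Illustrative sketch: after a threshold breach, rebuild the playlist
# around a tempo shifted toward a safer pace, so that the user slows
# down (or speeds up) by matching the new beat. All values are examples.

def retarget_playlist(library, current_tempo, direction, shift=10, size=2):
    """Return the titles of `size` tracks nearest the shifted tempo."""
    if direction == "slow_down":
        target = current_tempo - shift
    else:
        target = current_tempo + shift
    return [t["title"] for t in
            sorted(library, key=lambda t: abs(t["bpm"] - target))[:size]]

library = [
    {"title": "Easy Does It", "bpm": 140},
    {"title": "Mid Pace", "bpm": 150},
    {"title": "Red Line", "bpm": 160},
]

# Heart rate too high while running at 150 BPM: rebuild around 140 BPM.
print(retarget_playlist(library, 150, "slow_down"))
```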
[0057] Now in reference to FIG. 4, another example flowchart of
logic to be executed by a CE device such as the CE device 12 in
accordance with present principles is shown, again for presenting
music with a tempo to match a user's cadence but this time based on
a change in time and e.g. thus transition from one exercise segment
to another. The logic of FIG. 4 begins at block 110 where the logic
receives exercise information in accordance with present
principles. The logic then proceeds to block 112 where the logic
begins presenting music with a first tempo (e.g. first beat speed)
for a first time to match a user's actual and/or desired cadence in
accordance with present principles (e.g. after a user begins an
exercise routine). The logic then proceeds to decision diamond 114
where the logic determines if the first (e.g. preset) time has
expired at which the user was to exercise at the first tempo. Thus,
it is to be understood that the first time, and indeed subsequent
times, may be predefined by a user as input to the CE device prior
to beginning the exercise routine. For instance, the user may
provide input to the CE device to provide music of a certain tempo
for ten minutes so that a user can match his or her cadence
thereto, then present music of a relatively faster tempo for twenty
minutes thereafter so that a user can increase his or her pace
after ten minutes of warming up at a slower pace.
[0058] In any case, if the logic determines at diamond 114 that the
first time has not expired, the logic proceeds to block 116 where
the logic continues presenting music at the same tempo as prior to
the determination. If, however, the logic determines at diamond 114
that the first time has expired, the logic instead proceeds to
block 118 where the logic presents music with a second tempo (e.g.
second beat speed different than the first) for a second time to
match a user's actual and/or desired cadence for the second time in
accordance with present principles. The logic then proceeds to
decision diamond 120 where the logic determines if the second time
has expired at which the user was to exercise at the second tempo.
If the logic determines at diamond 120 that the second time has not
expired, the logic may proceed to block 116. If, however, the logic
determines at diamond 120 that the second time has expired, the
logic instead proceeds to block 122 where the logic presents music
with a third tempo (e.g. a third beat speed different than the
first and second beat speeds, or just different than the second
beat speed) for a third time to match a user's actual and/or
desired cadence for the third time in accordance with present
principles.
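The timed segments of FIG. 4 may be sketched as a simple schedule in which elapsed time selects the target tempo; the durations and tempos below are example values like those discussed above:

```python
# Illustrative sketch: each exercise segment pairs a duration (seconds)
# with a target tempo (BPM); elapsed time determines which tempo applies.

SEGMENTS = [
    (10 * 60, 120),  # ten minutes of warm-up at a slower 120 BPM
    (20 * 60, 150),  # twenty minutes at a relatively faster 150 BPM
    (10 * 60, 100),  # ten minutes of cool-down at 100 BPM
]

def tempo_at(elapsed_seconds, segments=SEGMENTS):
    """Return the target tempo for the segment containing elapsed_seconds."""
    boundary = 0
    for duration, tempo in segments:
        boundary += duration
        if elapsed_seconds < boundary:
            return tempo
    return segments[-1][1]  # past the planned routine: hold the last tempo

print(tempo_at(5 * 60), tempo_at(15 * 60), tempo_at(35 * 60))
```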
[0059] Continuing the detailed description in reference to FIG. 5,
it shows an example flowchart of logic to be executed by a server
for providing music to a CE device with a tempo to match a user's
cadence in accordance with present principles. The server logic of
FIG. 5 begins at block 130 where the logic receives a request to
access a user's account (e.g. such as a cloud storage account
stored on the server). Assuming successful authentication of the CE
device with the cloud account, access to the account is also
provided at block 130 by the server. The logic then proceeds to
block 132 where the logic receives tempo and/or cadence information
(e.g. based on input from a biometric sensor on the CE device
providing the information) for which music with a corresponding at
least substantially similar tempo is to be matched. The logic then
proceeds to block 134 where the logic locates and/or otherwise
determines music files stored on the server that comport with the
received tempo information. Note that at block 134, the music files
that match the received tempo data may be determined as set forth
herein (e.g. using music file metadata), and may be selected from
locations including the user's cloud storage on the server but also
or in lieu of that, music in the public domain and/or music
provided over e.g. a general publicly available music piece
library and/or an Internet radio service. These music sources may
or may not be used depending on e.g. settings set by a user at the
CE device by manipulating a user interface in accordance with
present principles.
[0060] In any case, after block 134 the logic proceeds to block 136
where the logic provides (e.g., streams) the music to the CE
device, along with providing any corresponding purchase information
for music files being provided that the user does not already own
and/or that are not in the user's cloud storage (e.g. based on
determinations, made by searching the user's storage areas for the
piece of music, that the user does not own the music), such as
music provided using an Internet radio service. The logic then
proceeds to decision diamond 138 where the logic determines whether
input has been received that was input at the CE device and
transmitted to the server that indicates one or more music files
have been designated (e.g., "bookmarked" by manipulating a user
interface on the CE device and/or providing an audible command
thereto) for purchase by the user. For instance, the user may want
to designate a song for later purchasing so the user does not
forget the details of the song he or she wished to purchase (and
hence become unable to locate it later), but at the same time does
not wish to complete all necessary purchase steps, such as e.g.
providing credit card information, while still exercising.
[0061] If the logic determines at decision diamond 138 that no
input has been received to designate one or more music files for
later purchasing, the logic proceeds to block 140 where the logic
stores data indicating the music files provided to the CE device so
that the same music files may be presented again at a later time
should the user elect to do so by manipulating the user's CE
device. Also at block 140 the logic may store any and/or all
biometric information it has received from the CE device (e.g. for
access by the user's physician to determine the user's health
status or simply to maintain biometric records in the user's cloud
storage). Referencing decision diamond 138 again, if the logic
determines thereat that input has been received to designate one or
more music files for later purchasing, the logic moves to block 142
where it stores data indicating as much for later access by the
user to use for purchasing the music (e.g. creates a "bookmark"
file indicating the music files designated for purchase).
Concluding the description of FIG. 5, note that after block 142 the
logic may proceed to block 140.
[0062] Continuing the detailed description in reference to FIG. 6,
an exemplary user interface (UI) 150 configured for receiving input
(e.g. touch input to a touch-enabled display presenting the UI 150)
from a user to configure settings of a CE device in accordance with
present principles is shown. The UI 150 includes a first setting
152 for configuring the CE device to match song lengths with
workout segments (e.g. a set of crunches) and/or exercise route
segments, and thus includes yes and no selector elements 154 for
providing input on whether or not, respectively, the CE device is
to match songs with segments. Also shown on the UI 150 is a second
setting 156 for whether the CE device should provide virtual
coaching instructions in accordance with present principles, and
includes yes or no selector elements 158 for providing input on
whether or not, respectively, the CE device should provide virtual
coaching.
[0063] In addition to the foregoing, the UI 150 may include a
non-verbal cue section 160. The section 160 may include left and
right turn settings 162, 164, with respective input fields 166, 168
for inputting a user-specified number of beeps (e.g. relatively
high-pitched sounds separated by periods of no sound) that are to
be provided to the user while proceeding on an exercise route to
instruct the user where to turn in accordance with present
principles. Also note that the settings 162, 164 include respective
selector elements 170, 172 that are selectable to cause another UI
and/or a window overlay to be presented for selecting from
available sounds other than the "beeps" that may be used to
indicate turns, and indeed it is to be understood that different
sounds may be used to indicate turns in addition to or in lieu of
differing sound sequences.
[0064] The UI 150 also includes a setting 174 for a user to provide
input using the yes or no selectors 176 regarding whether e.g.
non-verbal turn cues should be presented in only the ear piece
corresponding to the direction of the turn. For instance, a right
turn non-verbal cue would only be presented in the right earpiece,
whereas a left turn non-verbal cue would only be presented in the
left earpiece of headphones. A race virtual opponent setting 178
may also be included in the UI 150 and includes yes and no selector
elements 180 for a user to provide input on whether the user wishes
to have virtual opponent data (e.g. indications of the location of
the virtual opponent represented as non-verbal audio Doppler cues)
presented on the CE device in accordance with present principles.
Last, note that a submit selector 182 may be presented for
selection by a user for causing the CE device to be configured
according to the user's selections as input using the UI 150.
[0065] Turning now to FIG. 7, an exemplary UI 190 for configuring
gesture and/or voice control settings in accordance with present
principles is shown. The UI 190 includes a faster beat setting 192,
which includes gesture command selections 194 and voice command
selections 196 each for different gesture and voice command options
to provide input to the CE device to present a song with a faster
beat than one being currently presented. Note that one or more of
the selections for each of the gesture and voice commands may be
selected, if desired, though e.g. the CE device may prevent
selection of the same specific command for requesting both a faster
beat and a slower beat (e.g. the same hand gesture could not be
used for requesting a song with a faster beat and a slower beat).
In any case, the UI 190 also includes a slower beat setting 198,
which includes gesture command selections 200 and voice command
selections 202 each for different gesture and voice command options
to provide input to the CE device to present a song with a slower
beat than the one currently being presented.
[0066] In addition to the foregoing, the UI 190 may also include an
exercise machine configuration setting 204 for providing input to
the CE device for whether the CE device is to change exercise
machine configurations for an exercise machine (e.g. increasing or
decreasing resistance, speed, incline or decline, etc.) being used
by the user and in communication with the CE device (e.g., using
NFC, Bluetooth, a wireless network, etc.) based on the user's
biometrics and even e.g. user-defined settings for targeted and/or
desired biometrics for particular exercises and/or user-defined
settings for safe ranges of biometrics. For example, if the user
indicated that he or she wished his or her heart rate to average a
particular number of beats per minute, the CE device may configure the
exercise machine to increase or decrease its e.g. speed or
resistance to bring the user's actual heart rate into conformance
with the desired heart rate input by the user to the CE device.
Thus, the setting 204 includes yes and no selector elements 206 for
providing input to the CE device to command the CE device to change
exercise machine configurations accordingly or not, respectively.
Also note that the UI 190 also includes a select machine selector
element 208 for selecting an exercise machine to be communicatively
connected to and configured by the CE device (e.g. by presenting
another UI or overlay window for machine selection) and also a pair
using NFC selector element 210 that is selectable to configure the
CE device to communicate with the exercise machine automatically
upon close juxtaposition of the two (e.g. juxtaposition of
respective NFC elements) to exchange information for the CE device
to command and/or configure the exercise machine in accordance with
present principles.
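The exercise machine adjustment just described may be sketched as a simple feedback step; the step size, deadband, and resistance bounds are illustrative assumptions rather than required values:

```python
# Illustrative sketch: nudge the exercise machine's resistance level
# toward the user's target average heart rate. A +/- 5 BPM deadband
# avoids oscillating around the target; all constants are examples.

def adjust_resistance(current_level, actual_hr, target_hr,
                      step=1, lo=1, hi=20):
    """Raise resistance when heart rate is under target, lower it when over."""
    if actual_hr < target_hr - 5:       # well under target: work harder
        return min(hi, current_level + step)
    if actual_hr > target_hr + 5:       # well over target: ease off
        return max(lo, current_level - step)
    return current_level                # within the deadband: hold steady

print(adjust_resistance(8, 120, 140))  # under target: resistance rises
print(adjust_resistance(8, 150, 140))  # over target: resistance drops
```

Called periodically with fresh biometric readings, this step would bring the user's actual heart rate into conformance with the desired heart rate over time.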
[0067] Moving from FIG. 7 to FIG. 8, it shows an exemplary tempo
matching settings UI 220 including plural settings for matching a
user's cadence and/or heart rate with music of at least
substantially the same tempo in accordance with present principles.
The UI 220 includes at least a first setting 222 for matching tempo
based on one or more biometric parameters, and accordingly includes
a selection box 224 for a user to select one or more particular
biometric parameters for such purposes. A second setting 226 is
also shown for selecting one or more genres of music from which
music will be selected by the CE device for presentation when being
matched to a biometric parameter, and accordingly includes a
selection box 228 for a user to select one or more music genres for
such purposes. A third setting 230 is also shown for selecting one
or more moods of the user which the CE device is to (e.g.
intelligently) match with music of a corresponding mood, the music
also including a matching tempo in accordance with present
principles, and accordingly setting 230 includes a selection box
232 for a user to select one or more moods that the user is feeling
for such purposes. A fourth setting 234 is included on the UI 220
as well, the setting 234 being for selecting one or more musical
artists associated with music pieces to be selected by the CE
device for presentation when being matched to a biometric
parameter, and accordingly includes a selection box 236 for a user
to select one or more artists for such purposes. Yet a fifth
setting 238 may be presented for selecting one or more previous
exercise routine and/or workout music playlists that were
previously presented in accordance with present principles from
which music may be selected for the current exercise routine (e.g.,
if the CE device determines that the music from the previous
playlist has a beat matching one or more current biometric
parameters), and accordingly includes a selection box 240 for
selecting one or more previous exercise routine playlists for such
purposes.
[0068] It is to be understood that still other settings may be
configured using the UI 220, such as a setting 242 for matching
music using the likes and/or preferences of social networking
friends, and accordingly includes respective yes and no selector
elements 244 for providing input to the CE device for whether to
match music to be presented with one or more biometric parameters
based on likes from the user's social networking friends. Note that
e.g. the CE device may be configured to access one or more of the
user's social networking services (e.g. based on username and
password information provided by the user), to parse data in the
social networking service, and make correlations between social
networking posts and e.g. track names (e.g. from a database of
track names) for musical tracks to thereby identify music that is
"trending" or otherwise "liked" by the user's friends. Still
another setting 246 may be presented for matching music in
accordance with present principles by using music that is currently
popular based on e.g. Billboard ratings, total sales on an online
music providing service, currently trending even if on a social
networking site of which the user is not a member, etc., and
accordingly includes yes and no selectors 248 for providing input
to the CE device for whether to match music in accordance with
present principles using currently popular music. The UI 220 may
also include a cloud storage setting 250 with a cloud selector
element 252 and a local storage selector element 254 that are both
selectable by the user to provide input to the CE device for
different storage locations from which the CE device may gather
and/or stream music to be presented in accordance with present
principles. Thus, selecting the selector element 252 configures the
CE device to gather music from the user's cloud storage account,
and selecting the selector element 254 configures the CE device to
gather music from the CE device's local storage area, and indeed
either or both of the selector elements 252, 254 may be selected.
The UI 220 may include still another setting 256 with yes and no
selectors 258 for providing input to the CE device on whether to
instruct a server to insert recommended music into a playlist
and/or sequence of music to be presented during the exercise
routine, including e.g. Internet radio music, sponsored music,
music determined by the processor as being potentially likeable by
the user (e.g. based on genre indications input by the user,
similar music already owned by the user, etc.), music not owned by
the user but nonetheless comporting with one or more other settings
of the UI 220 (such as being from a genre from which the user
desires music to be presented), etc.
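The social-networking correlation described above (parsing posts and matching them against a database of track names to find music "trending" among the user's friends) might be sketched as follows; the posts, track names, and mention threshold are invented for illustration:

```python
# Illustrative sketch: count how often known track titles appear in
# friends' posts and surface the most-mentioned ones as "trending".
from collections import Counter

def trending_tracks(posts: list[str], track_names: list[str],
                    min_mentions: int = 2) -> list[str]:
    """Return track names mentioned at least min_mentions times."""
    counts = Counter()
    for post in posts:
        lowered = post.lower()
        for name in track_names:
            if name.lower() in lowered:
                counts[name] += 1
    return [name for name, n in counts.items() if n >= min_mentions]

posts = ["Loving Run Fast today!", "Run Fast on repeat", "Chill Beats is ok"]
print(trending_tracks(posts, ["Run Fast", "Chill Beats"]))  # ['Run Fast']
```

A real implementation would authenticate against the social networking service's API rather than match raw strings, but the correlation step is the same in principle.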
[0069] Still in reference to FIG. 8, in addition to the foregoing,
the UI 220 may include a bookmark music setting 260 for configuring
the CE device to receive commands to designate one or more pieces
of music that are presented during a workout routine for purchasing
at a later time after the workout concludes. Thus, a gesture
selector element 262 is selectable to configure the CE device to
receive a (e.g. predefined) gesture command to designate music
accordingly, as well as an audible command selector element 264
selectable to configure the CE device to receive an (e.g.
predefined) audible command to designate music for purchasing, and
even an entire playlist selector element 266 that is selectable to
configure the CE device to at a time after conclusion of the
workout present a listing (e.g. playlist) of all music pieces that
were presented to the user during the workout routine and from
which a user may select one or more music pieces for purchasing.
Note that in some embodiments, selection of the selector elements
262, 264 may automatically without further user input configure the
CE device to present another UI and/or an overlaid UI for a user to
specify one or more particular gestures and/or audible commands
that are to be associated by the CE device as being a command(s) to
designate/bookmark a particular piece of music when that particular
piece of the music is presented in accordance with present
principles. Thus, for instance, should a particular gesture be
designated as a command to bookmark the music piece when detected
by the CE device, the CE device upon receiving the command may set
a flag and/or data marker for the music to be identified at a later
time and presented to the user as being previously bookmarked. In
such instances the CE device need not present e.g. an audible or
visual indication of bookmarking upon receiving the command
(although in some implementations brief audible feedback such as a
chime sound may be presented to indicate to the user that the CE
device received the bookmark command and did indeed "bookmark" the
piece of music for later purchasing).
[0070] Still in reference to the UI 220, a skipping music setting
268 is shown for skipping a piece of music the user does not like
(e.g. if recommended to the user during an exercise routine). Thus,
a gesture selector element 270 and an audible selector element 272
are both selectable for configuring the CE device to skip a piece
of music being presented responsive to receiving a (e.g.
predefined) gesture or audible command, respectively, indicating as
much. Note further that each of the selector elements 270, 272 may
be selectable to configure the CE device to present another UI and/or
an overlaid UI for a user to specify one or more particular
gestures and/or audible commands that are to be associated by the
CE device as being a command(s) to skip a piece of music in
accordance with present principles.
[0071] Concluding the description of FIG. 8, the UI 220 also
includes a share selector element 274 selectable to configure the
CE device to automatically post, publish, and/or share, etc., over
one or more social networking services the piece(s) of music and/or
music playlist presented to the user while exercising upon
completion of the exercise routine, it being understood that the CE
device may also be configured to present on a display of the CE
device the playlist e.g. after the workout routine has been
completed, including presentation of music metadata and music
tempos. A submit selector element 276 is also included for
submitting the user's selections of settings in accordance with
present principles.
[0072] Now in reference to FIG. 9, another UI 280 is shown for
presenting current biometric information, music information, etc.
while engaged in an exercise routine. It is to be understood that
the UI 280 may thus be presented on the display of an exercise
machine for viewing by the user while using the machine, and/or on
the user's personal CE device that is in communication with the
exercise machine. In any case, the UI 280 includes a music
information section 282 including various pieces of information
about a piece of music currently being presented that was matched
by the CE device with one or more of the user's biometric
parameters in accordance with present principles. As may be
appreciated from the section 282, the music information may include
e.g. artist name, track title of the song, album of the song,
duration of the song, who owns the song (e.g. the user and stored
locally on the CE device, and/or a third party music provider
streaming the music to the CE device), an indication of the
popularity of the music and even a particular demographic with
which the music is popular (e.g. in the present instance the song
is popular based on "like" indications by five kilometer runners
input at their respective CE devices, and in other instances
popular and/or recommended music from a user's personal trainer
monitoring exercise plans and observing biometric information
collected by the CE device in accordance with present principles),
and an indication of the beats per minute of the song. Note that
although the CE device may access music e.g. using its own network
interface to access a cloud storage area of the user, in addition
to or in lieu of that the exercise machine may itself access a
storage area storing music and then e.g. stream the music from the
exercise machine to the user's headphones (e.g. using NFC
pairing).
[0073] In addition to the foregoing, the UI 280 also includes a
biometric parameter section 284 for presenting one or more pieces
of information related to the user's biometric parameters as
detected by one or more biometric sensors such as those described
above in reference to FIG. 1. For instance, information that may be
presented includes heart rate information, cadence information,
and/or breathing information.
[0074] Furthermore, the UI 280 may include a prompt 286 for a user
to provide input using yes and no selectors 288 while a piece of
music is being currently presented during the exercise routine to
easily bookmark the piece of music for later purchasing (e.g., one
touch bookmarking). The UI 280 includes a second prompt 290 for a
user to provide input using yes and no selectors 292 while a piece
of music is being currently presented during the exercise routine
to automatically without further user input store the particular
piece of music in the user's cloud storage once purchased or if
purchasing is not necessary. Last, an option 294 is presented on
the UI 280 for whether to change exercise machine configurations
manually using yes and no selectors 296, and thus e.g. selection of
the yes selector from the selectors 296 may cause another UI to be
presented and/or overlaid that includes exercise machine settings
configurable by a user to configure the exercise machine. This may
be desirable when e.g. the CE device automatically configures the
exercise machine according to one or more biometric parameters in
accordance with present principles but the user nonetheless wishes
to manually override the automatic configuration.
[0075] Moving on in the detailed description with reference to FIG.
10, an exemplary illustration 300 of present principles is shown.
As may be appreciated from the caption boxes
of FIG. 10, a user and a CE device in accordance with present
principles are audibly exchanging information and indeed the CE
device is audibly providing a "virtual coach" to provide (e.g.
intelligently determined) encouragement to the person shown in the
illustration 300 and even encouragement based on e.g. biometric
data. Another illustration 302 is shown in FIG. 11 including a
graph 304 indicating the various segments of a user's workout
routine represented in terms of heartbeats per minute over time,
and also shows thumbnails 306 sequentially arranged from first
music presented to last music presented, where each one is
respectively associated with a piece of music presented during the
exercise routine and matched to the user's one or more biometric
parameters, and/or an album from which the piece of music was
selected. Note that a caption 308 is also shown that indicates an
example of audio feedback that may be presented by the CE device
during a "cool down" exercise stage, identifies the song, and/or
provides instruction on how to bookmark the music (e.g. for later
purchasing and/or listening). In the present exemplary instance the
CE device may bookmark the music (e.g. and may also store bookmark
information locally on the CE device's storage medium) responsive
to a single tap input by the person to a particular area of a
touch-enabled display of the CE device or any touch-enabled area,
and furthermore a double tap input to a particular area of a
touch-enabled display of the CE device or any touch-enabled area
may be provided by the user to skip the song being presented and
cause the CE device to automatically without further user input
provide another song matching the user's biometric parameter(s)
and/or cool down phase of the exercise routine.
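The single-tap/double-tap behavior just described might be sketched as follows. The `Player` class, song titles, and return strings are hypothetical; the application only specifies the single-tap-to-bookmark and double-tap-to-skip mapping:

```python
# Sketch of the FIG. 11 tap handling: one tap bookmarks the current song
# for later purchase; a double tap skips to another matched song.
class Player:
    def __init__(self):
        self.bookmarked: list[str] = []
        self.current = "Cooldown Song"

    def on_tap(self, tap_count: int) -> str:
        if tap_count == 1:
            self.bookmarked.append(self.current)  # flag for later purchasing
            return "bookmarked"
        if tap_count == 2:
            # skip and automatically present another biometric-matched song
            self.current = "Next Matched Song"
            return "skipped"
        return "ignored"

player = Player()
print(player.on_tap(1))  # prints "bookmarked"
print(player.on_tap(2))  # prints "skipped"
```

Song selection after a skip would reuse whatever biometric-matching logic the CE device applies elsewhere in accordance with present principles.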
[0076] With no particular reference to any figure, it is to be
understood that in accordance with present principles, the CE
devices disclosed herein may be configured in still other ways to
match music with one or more biometric parameters. For instance,
when determining whether a biometric parameter conforms to at least
a portion of planned physical activity information, such
determining may be executed e.g. periodically at a predefined
periodic interval, where responsive to the determination that the
biometric parameter does not conform to at least a portion of
planned physical activity information, the CE device may
automatically present an audio indication in accordance with
present principles by altering the time scale of a music file being
presented on the CE device. E.g., rather than presenting an
entirely different piece of music to the user, the CE device may
digitally stretch or compress the currently presented music file to
thereby adjust the beats per minute as presented to the user in
real time. Thus, time stretching of the music file may be
undertaken by the CE device, as may resampling of the music file to
change the duration and hence beats per minute.
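The resampling option just mentioned might be sketched as follows. Note that naive resampling shifts pitch along with tempo; the time-stretching alternative the paragraph also mentions would preserve pitch (e.g., via a phase vocoder) but is more involved. The sample values and BPM figures here are invented:

```python
# Minimal resampling sketch: interpolating the samples changes the clip's
# duration, and hence its effective beats per minute, at the cost of
# shifting pitch. A production player would likely prefer a
# pitch-preserving time stretch instead.
def resample_to_bpm(samples: list[float], current_bpm: float,
                    target_bpm: float) -> list[float]:
    """Shorten or lengthen the audio so its beats land at target_bpm."""
    n_out = int(round(len(samples) * current_bpm / target_bpm))
    out = []
    for i in range(n_out):
        # linear interpolation between neighboring input samples
        pos = i * (len(samples) - 1) / max(n_out - 1, 1)
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

clip = [float(i) for i in range(1000)]          # stand-in audio ramp
faster = resample_to_bpm(clip, current_bpm=120, target_bpm=150)
print(len(faster))  # prints 800: the clip now plays 25% faster
```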
[0077] In reference to the automated and/or virtual coaching
discussed herein, it is to be understood that the CE device may
present such information when the user configures settings for it
to do so (e.g. using a UI such as the ones described above).
Virtual coaching may include notifying a user when the user is
transitioning from one exercise segment to another (e.g. based on
GPS data accessible to the CE device while on an exercise route).
For instance, the virtual coach may indicate, "You are starting to
proceed up a hill, which is segment three of your exercise." Other
instructions that may be provided by a virtual coach include, e.g.,
at the beginning of an exercise routine, "Starting your ride now,"
and "At the fork in the road ahead, turn right." Also at the
beginning of the workout and assuming the user has not already
provided input to the CE device instructing the CE device to
present a virtual opponent in accordance with present principles,
the CE device may provide an audio prompt at the beginning of the
exercise routine asking whether the user wishes to race a virtual
opponent (e.g., "Would you like to race against a virtual
opponent?"), to which e.g. the user may audibly respond to in the
affirmative as recognized by the CE device processor using natural
language voice recognition principles.
[0078] As other examples of indications that may be made by a
"virtual coach" using the CE device, the CE device may indicate
after conclusion of an exercise routine how much time, distance,
and/or speed by which the user beat the virtual opponent. Also
after conclusion of the routine, the CE device may e.g. audibly
(and/or visually) provide statistics to the user such as the user's
biometric readings, the total time to completion of the exercise
routine, the distance traveled, etc. Even further, the CE device
may just before conclusion of the exercise routine provide an
audible indication that the routine is almost at conclusion by
indicating a temporal countdown until finish such as, "Four, three,
two, one . . . finished!"
[0079] Referring specifically to gestures in free space that are
recognizable by the CE device as commands to the CE device in
accordance with present principles, note that not only may a user
e.g. skip a song or request a song with a faster or slower pace
based on gestures in free space detected by a motion/gesture
detector communicating with the CE device, but may also e.g., pause
a song if the user temporarily stops an exercise. For instance, if
while proceeding on an exercise route the user happens upon a
friend also walking therealong, the user may provide a gesture in
free space predefined at the CE device as being a command to stop
presenting music (and/or tracking biometric data) until another
gesture command is received to resume presentation of the
music.
[0080] Now in reference to the music, music files, songs, etc.
described herein, present principles recognize that although much
of the present specification has been directed specifically to
music-related files, present principles may apply equally to any
type of audio file and even e.g. audio video files as well (e.g.,
presenting just the audio from an audio video file or presenting
both audio and video). Furthermore and in the context of a music
file, the metadata for music files described herein may include not
only beats per minute and music genre but still other information
as well such as e.g., the lyrics to the song.
[0081] Present principles also recognize that although much of the
specification has been directed specifically to exercise routines,
present principles may apply not only to exercising but also e.g.
sitting down at a desk, where the CE device can detect e.g. using a
brain activity monitor and blood pressure monitor that a user is
stressed and thus suggests and/or automatically presents calming
music to the user.
[0082] Notwithstanding, present principles as applied to exercising
recognize that the following are exemplary audible and/or visual
outputs by the CE device in accordance with present principles:
[0083] "Different song to get going?", which may be presented
responsive to a determination that the user is not keeping up a
pace input by the user as being the desired pace.
[0084] "You are slowing down, want a different song?", which may be
presented responsive to a determination that the user is beginning
to slow down his or her pace (e.g. gradually but falling outside
the predefined desired pace).
[0085] "Run until end of song," which may be presented responsive
to a determination that the user is about to come to the end of an
exercise segment or the exercise routine in totality, and hence the
end of the current song signifies the end of the segment and/or
routine.
[0086] "Increase activity for next minute," which may be presented
responsive to a determination that the user needs to exercise
faster for the next minute to comport with e.g. a predefined
exercise goal. Such CE device feedback may also be provided e.g.
for the user to gradually increase their tempo/cadence as a workout
progresses from a lower intensity segment to a higher intensity
segment.
[0087] "Your heart rate is one hundred two beats per minute," which
may be presented responsive to a determination that a user has
input a command during an exercise routine requesting biometric
information for heart rate.
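The exemplary outputs above amount to a mapping from workout determinations to prompts, which might be sketched as follows; the condition labels are invented names for the situations the paragraphs describe:

```python
# Sketch: map detected workout conditions to the example audible/visual
# prompts enumerated above. Condition keys are illustrative labels.
from typing import Optional

PROMPTS = {
    "below_pace": "Different song to get going?",
    "slowing_down": "You are slowing down, want a different song?",
    "segment_ending": "Run until end of song",
    "increase_needed": "Increase activity for next minute",
}

def coach_prompt(condition: str, heart_rate: Optional[int] = None) -> str:
    """Return the prompt for a detected condition, or "" if none applies."""
    if condition == "heart_rate_query" and heart_rate is not None:
        return f"Your heart rate is {heart_rate} beats per minute"
    return PROMPTS.get(condition, "")

print(coach_prompt("heart_rate_query", heart_rate=102))
```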
[0088] Present principles also recognize that more than one CE
device may provide e.g. non-verbal audio cues to set a
pace/cadence for respective users exercising together. For example,
two or more people may wish to exercise together but do not wish to
listen to the same music. The users' CE devices may communicate
with each other and, based on predefined cadence/tempo metadata
that is exchanged therebetween (e.g. based on a desired cadence
indicated by a user prior to the workout routine), different
songs with the same beats per minute matching the predefined
cadence may be presented on each respective CE device so that the
users may establish the same pace albeit with different music.
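The shared-cadence scenario just described might be sketched as follows: each device selects a song from its own library whose tempo equals the exchanged cadence. The libraries and cadence value are hypothetical:

```python
# Sketch: two CE devices exchange a target cadence and each picks its own
# song at that tempo, so both users keep the same pace to different music.
def pick_for_cadence(library: dict[str, int], cadence_bpm: int) -> str:
    """Return any owned track whose BPM matches the shared cadence."""
    for title, bpm in library.items():
        if bpm == cadence_bpm:
            return title
    return ""

user_a_library = {"Rock Anthem": 160, "Slow Jam": 80}
user_b_library = {"EDM Pulse": 160, "Ballad": 70}
shared_cadence = 160  # metadata exchanged between the two CE devices
print(pick_for_cadence(user_a_library, shared_cadence))  # "Rock Anthem"
print(pick_for_cadence(user_b_library, shared_cadence))  # "EDM Pulse"
```

A tolerance band (rather than exact equality) would make this more practical, as music tempos rarely match a cadence exactly.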
[0089] Moving on, it is to be understood that e.g. after conclusion
of an exercise routine, the user may not only share the user's
exercise routine over a social networking service but may also e.g.
provide the exercise data to a personal trainer's CE device (e.g.
using a commonly-used fitness application) so that the personal
trainer may evaluate the user and view exercise results, biometric
information, etc.
[0090] Describing changes in cadence/tempo of a user, it is to be
understood that should the user break stride, the CE device
although detecting as much may not automatically change songs to
match the new cadence but in some implementations may e.g. wait for
the expiration of a threshold time at which the user runs at the
new cadence, thereby not changing songs every time the user
accidentally breaks pace and instead changing songs once the user
has intentionally established a new pace.
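The threshold-time behavior just described is essentially a debounce: a new cadence is only acted on once it has persisted. A minimal sketch, with an invented hold time of ten seconds:

```python
# Debounce sketch for the stride-break behavior above: a new cadence is
# "established" (triggering a song change) only after it has persisted
# for a threshold time, so accidental breaks in pace are ignored.
class CadenceDebouncer:
    def __init__(self, hold_seconds: float = 10.0):
        self.hold = hold_seconds
        self.established = None       # cadence the music currently matches
        self.candidate = None         # tentatively observed new cadence
        self.candidate_since = None   # timestamp the candidate first appeared

    def update(self, cadence: int, now: float):
        """Feed one cadence sample; return the newly established cadence
        when the threshold elapses, else None."""
        if cadence == self.established:
            self.candidate = None     # user returned to stride; reset
            return None
        if cadence != self.candidate:
            self.candidate, self.candidate_since = cadence, now
            return None
        if now - self.candidate_since >= self.hold:
            self.established, self.candidate = cadence, None
            return self.established
        return None
```

Here an accidental break (a cadence sample that reverts before the hold time expires) never triggers a song change, matching the behavior described above.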
[0091] Describing the non-verbal cues with more specificity, note
that e.g. the CE devices described herein may be configured to
dynamically without user input change from providing verbal cues to
only providing non-verbal cues in some instances when e.g., after a
threshold number of times making the same turn or otherwise
exercising on the same route, the CE device determines that only
non-verbal cues should be presented. This may be advantageous to a
user who is already familiar with a neighborhood in which the user
is exercising and hence does not necessarily need verbal cues but
may nonetheless wish to have non-verbal ones presented that do not
audibly interfere with the user's music as much as the verbal cues.
Such determinations may be made e.g. at least in part by storing
GPS data as the user proceeds along the route each time it is
traveled which at a later time may be analyzed to determine whether
the threshold number of times has been met.
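The route-familiarity determination just described might be sketched as follows; the route identifiers and the threshold of three trips are illustrative assumptions:

```python
# Sketch: count how many times a route (recognized from stored GPS traces)
# has been traveled, and switch from verbal to non-verbal cues once the
# trip count meets a familiarity threshold.
from collections import defaultdict

class CueSelector:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.trips = defaultdict(int)  # route id -> completed trip count

    def record_trip(self, route_id: str) -> None:
        self.trips[route_id] += 1      # logged when a stored route is matched

    def cue_mode(self, route_id: str) -> str:
        if self.trips[route_id] >= self.threshold:
            return "non-verbal"        # user knows the route; chimes only
        return "verbal"                # full spoken directions

selector = CueSelector()
selector.record_trip("park_loop")
print(selector.cue_mode("park_loop"))  # prints "verbal"
```

Matching a fresh GPS trace against previously stored routes (to decide that two trips are "the same route") is the harder problem and is left abstract here, as it is in the specification.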
[0092] Present principles further recognize that although some of
the specification describes CE device features in reference to e.g.
running or cycling, present principles may apply equally to other
instances as well such as e.g. swimming or any other exercises
establishing repetitive/rhythmic exercise motions.
[0093] Last, note that the headphones described herein may be
configured to e.g. undertake active noise reduction on ambient
noise present while exercising, while still allowing "transient"
sounds like the sound generated by passing cars or someone talking
to the exerciser to be heard by the exerciser. This headphone
configuration thus promotes safety but still allows for clearly
listening to music without unwanted ambient noises interfering with
the user's listening enjoyment.
[0094] While the particular PRESENTING AUDIO BASED ON BIOMETRIC
PARAMETERS is herein shown and described in detail, it is to be
understood that the subject matter which is encompassed by the
present invention is limited only by the claims.
* * * * *