U.S. patent application number 13/623545 was filed with the patent office on 2012-09-20 and published on 2013-12-26 for multisensor hearing assist device for health.
This patent application is currently assigned to BROADCOM CORPORATION. The applicant listed for this patent is BROADCOM CORPORATION. Invention is credited to James D. Bennett and John Walley.
Publication Number | 20130343585 |
Application Number | 13/623545 |
Family ID | 49774491 |
Publication Date | 2013-12-26 |
Filed Date | 2012-09-20 |
United States Patent Application | 20130343585 |
Kind Code | A1 |
Bennett; James D.; et al. | December 26, 2013 |
MULTISENSOR HEARING ASSIST DEVICE FOR HEALTH
Abstract
A hearing assist device is associated with an ear of a user.
Health characteristics of the user are measured by sensors of the
hearing assist device. The measured health characteristics may be
analyzed in the hearing assist device, or transmitted from the
hearing assist device for remote analysis. Based on the analysis,
alerts, instructions, and other information may be displayed to the
user in the form of text or graphics, or may be played by the
hearing assist device in the form of sound/voice. Medical personnel
may be alerted when problems with the user are detected by the
hearing assist device. The user may provide verbal commands to the
hearing assist device. The hearing assist device may be configured
to filter sounds, and may be configured for a personal hearing
frequency response of the user. The hearing assist device may be
configured for speech recognition to recognize commands.
Inventors: | Bennett; James D.; (Hroznetin, CZ); Walley; John; (Ladera Ranch, CA) |
Applicant: | BROADCOM CORPORATION; Irvine, CA, US |
Assignee: | BROADCOM CORPORATION; Irvine, CA |
Family ID: | 49774491 |
Appl. No.: | 13/623545 |
Filed: | September 20, 2012 |
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number |
61662217 | Jun 20, 2012 | |
Current U.S. Class: | 381/315; 381/312; 381/317; 381/323 |
Current CPC Class: | H04R 2460/13 20130101; H04R 2225/55 20130101; H04R 2460/03 20130101; H04R 25/554 20130101; H04W 4/80 20180201; H04R 2225/43 20130101; A61J 1/00 20130101 |
Class at Publication: | 381/315; 381/312; 381/323; 381/317 |
International Class: | H04R 25/00 20060101 H04R025/00 |
Claims
1. A hearing assist device that is mounted in association with an
ear of a user, the user being supported by a second device, the
hearing assist device comprising: a medical sensor configured both
to sense a characteristic of the user and to generate a sensor
output signal; processing logic configured to construct sensor data
based at least in part on the sensor output signal; and a
transceiver configured for wireless communications with the second
device, the wireless communications to the second device comprising
the sensor data, and the wireless communications from the second
device comprising at least one command.
2. The hearing assist device of claim 1, further comprising: at
least one additional medical sensor configured to sense a
corresponding characteristic of the user and generate a
corresponding sensor output signal.
3. The hearing assist device of claim 1, further comprising: a
rechargeable battery; and a charging circuit configured to receive
a radio frequency signal and generate a charge current that charges
the rechargeable battery based on the received radio frequency
signal.
4. The hearing assist device of claim 1, further comprising: a
speaker; wherein the processing logic is configured to generate an
audio signal at least relating to the processed sensor data; and
wherein the speaker is configured to broadcast sound into the ear
of the user generated based on the audio signal.
5. The hearing assist device of claim 4, wherein the sound includes
at least one of a tone, a beeping sound, or a voice that includes
at least one of a verbal instruction to the user, a verbal warning
to the user, or a verbal question to the user.
6. The hearing assist device of claim 1, further comprising: a
microphone configured to receive environmental sound and generate
an audio signal based on the received environmental sound; wherein
the processing logic is configured to selectively favor one or more
frequencies of the audio signal to generate a modified audio
signal; and wherein the speaker is configured to receive the
modified audio signal and broadcast sound generated from the
modified audio signal into the ear of the user.
7. The hearing assist device of claim 6, wherein the processing
logic is configured to selectively favor the one or more
frequencies of the environmental sound to perform at least one of
noise cancelation or hearing loss compensation.
8. The hearing assist device of claim 6, further comprising: a
microphone configured to receive a voice of the user and generate
an audio signal based on the received voice; wherein the processing
logic is configured to generate an information signal based on the
audio signal; and wherein the transceiver is configured to transmit
the generated information signal to the second device.
9. The hearing assist device of claim 1, further comprising: a near
field communication (NFC) coil coupled to the transceiver.
10. A method in a hearing assist device that is mounted in
association with an ear of a user, comprising: receiving medical
sensor data based on sensor information determined by at least one
medical sensor of the hearing assist device; transmitting a first
communication related to the medical sensor data to a
second device; receiving a second communication that is generated
based on the first communication; and performing a function
identified by the second communication.
11. The method of claim 10, wherein said receiving a second
communication comprises: receiving additional sensor data
determined by the second device.
12. The method of claim 10, wherein the second device is a
supporting local device that relays information of the first
communication to a remote third device, the method further
comprising: analyzing the medical sensor data at the remote third
device.
13. The method of claim 10, further comprising: analyzing the
medical sensor data in the hearing assist device.
14. The method of claim 10, wherein the second communication is a
command relating to sensor data capture, and the command relating
to sensor data capture relates to at least one of a sensing
configuration or a hearing assist device configuration.
15. The method of claim 10, wherein the second communication
defines audio playback, the second communication includes audio
data for the audio playback, and the audio playback is configured
to prompt for user input.
16. The method of claim 10, wherein the second device is
configured to contact a third party via a third party device.
17. A hearing assist device that is mounted in association with an
ear of a user, comprising: a medical sensor configured to sense a
characteristic of the user and generate a sensor output signal;
processing logic configured to generate processed sensor data based
on the sensor output signal; and a speaker configured to receive a
voice audio signal related to the processed sensor data, and
broadcast voice from the speaker into the ear of the user based on
the received voice audio signal.
18. The hearing assist device of claim 17, further comprising: a
transceiver configured to wirelessly transmit the processed sensor
data to a second device, and wirelessly receive voice information
included in the voice audio signal from the second device in
response to the transmitted processed sensor data.
19. The hearing assist device of claim 17, further comprising:
storage configured to store a plurality of voice segments; wherein
the processing logic is configured to select a voice segment from
the storage and generate the voice audio signal based on the
selected voice segment.
20. The hearing assist device of claim 19, wherein the processing
logic is configured to modify at least one frequency characteristic
of the voice audio signal according to a hearing frequency response
of the user.
Description
[0001] This application claims the benefit of U.S. Provisional
Application No. 61/662,217, filed on Jun. 20, 2012, which is
incorporated by reference herein in its entirety.
BACKGROUND
[0002] 1. Field of the Invention
[0003] The present invention relates to hearing assist devices that
sense, analyze, and communicate user health characteristics.
[0004] 2. Background Art
[0005] Persons may become hearing impaired for a variety of
reasons, including aging and being exposed to excessive noise,
which can both damage hair cells in the inner ear. A hearing aid is
an electro-acoustic device that typically fits in or behind the ear
of a wearer, and amplifies and modulates sound for the wearer.
Hearing aids are frequently worn by persons who are hearing
impaired to improve their ability to hear sounds. A hearing aid may
be worn in one or both ears of a user, depending on whether one or
both of the user's ears need assistance.
BRIEF SUMMARY
[0006] Methods, systems, and apparatuses are described for hearing
assist devices that include health sensors, transmitters, and
receivers, as well as additional and/or alternative functionality,
substantially as shown in and/or described herein in connection
with at least one of the figures, as set forth more completely in
the claims.
BRIEF DESCRIPTION OF THE DRAWINGS/FIGURES
[0007] The accompanying drawings, which are incorporated herein and
form a part of the specification, illustrate embodiments and,
together with the description, further serve to explain the
principles of the embodiments and to enable a person skilled in the
pertinent art to make and use the embodiments.
[0008] FIG. 1 shows a communication system that includes a
multi-sensor hearing assist device that communicates with a near
field communication (NFC)-enabled communications device, according
to an exemplary embodiment.
[0009] FIGS. 2-4 show various configurations for associating a
multi-sensor hearing assist device with an ear of a user, according
to exemplary embodiments.
[0010] FIG. 5 shows a multi-sensor hearing assist device that
mounts over an ear of a user, according to an exemplary
embodiment.
[0011] FIG. 6 shows a multi-sensor hearing assist device that
extends at least partially into the ear canal of a user, according
to an exemplary embodiment.
[0012] FIG. 7 shows a circuit block diagram of a multi-sensor
hearing assist device that is configured to communicate with
external devices according to multiple communication schemes,
according to an exemplary embodiment.
[0013] FIG. 8 shows a flowchart of a process for a hearing assist
device that processes and transmits sensor data and receives a
command from a second device, according to an exemplary
embodiment.
[0014] FIG. 9 shows a communication system that includes a
multi-sensor hearing assist device that communicates with one or
more communications devices and network-connected devices,
according to an exemplary embodiment.
[0015] FIG. 10 shows a flowchart of a process for wirelessly
charging a battery of a hearing assist device, according to an
exemplary embodiment.
[0016] FIG. 11 shows a flowchart of a process for broadcasting
sound that is generated based on sensor data, according to an
exemplary embodiment.
[0017] FIG. 12 shows a flowchart of a process for generating and
broadcasting filtered sound from a hearing assist device, according
to an exemplary embodiment.
[0018] FIG. 13 shows a flowchart of a process for generating an
information signal in a hearing assist device based on a voice of a
user, and transmitting the information signal to a second device,
according to an exemplary embodiment.
[0019] FIG. 14 shows a flowchart of a process for generating voice
based at least on sensor data to be broadcast by a speaker of a
hearing assist device to a user, according to an exemplary
embodiment.
[0020] FIG. 15 shows a system that includes a hearing assist device
and a cloud/service/phone portable device that may be
communicatively connected thereto, according to an exemplary
embodiment.
[0021] Embodiments will now be described with reference to the
accompanying drawings. In the drawings, like reference numbers
indicate identical or functionally similar elements. Additionally,
the left-most digit(s) of a reference number identifies the drawing
in which the reference number first appears.
DETAILED DESCRIPTION
I. Introduction
[0022] The present specification discloses numerous example
embodiments. The scope of the present patent application is not
limited to the disclosed embodiments, but also encompasses
combinations of the disclosed embodiments, as well as modifications
to the disclosed embodiments.
[0023] References in the specification to "one embodiment," "an
embodiment," "an example embodiment," etc., indicate that the
embodiment described may include a particular feature, structure,
or characteristic, but every embodiment may not necessarily include
the particular feature, structure, or characteristic. Moreover,
such phrases are not necessarily referring to the same embodiment.
Further, when a particular feature, structure, or characteristic is
described in connection with an embodiment, it is submitted that it
is within the knowledge of one skilled in the art to effect such
feature, structure, or characteristic in connection with other
embodiments whether or not explicitly described.
[0024] Furthermore, it should be understood that spatial
descriptions (e.g., "above," "below," "up," "left," "right,"
"down," "top," "bottom," "vertical," "horizontal," etc.) used
herein are for purposes of illustration only, and that practical
implementations of the structures described herein can be spatially
arranged in any orientation or manner.
[0025] Numerous exemplary embodiments are described as follows. It
is noted that any section/subsection headings provided herein are
not intended to be limiting. Embodiments are described throughout
this document, and any type of embodiment may be included under any
section/subsection. Furthermore, disclosed embodiments may be
combined with each other in any manner.
II. Example Hearing Assist Device Embodiments
[0026] Persons may become hearing impaired for a variety of
reasons, including aging and being exposed to excessive noise,
which can both damage hair cells in the inner ear. A hearing aid is
an electro-acoustic device that typically fits in or behind the ear
of a wearer, and amplifies and modulates sound for the wearer.
Hearing aids are frequently worn by persons who are hearing
impaired to improve their ability to hear sounds. A hearing aid may
be worn in one or both ears of a user, depending on whether one or
both of the user's ears need hearing assistance.
[0027] Opportunities exist for integrating further functionality
into hearing assist devices that are worn in/on a human ear.
Hearing assist devices, such as hearing aids, headsets, and
headphones, are typically worn in contact with the user's ear, and
in some cases extend into the user's ear canal. As such, a hearing
assist device is typically positioned in close proximity to various
organs and physical features of a wearer, such as the inner ear
structure (e.g., the ear canal, ear drum, ossicles, Eustachian
tube, cochlea, auditory nerve, etc.), skin, brain, veins and
arteries, and further physical features of the wearer. Because of
this advantageous positioning, a hearing assist device may be
configured to detect various characteristics of a user's health.
Furthermore, the detected characteristics may be used to treat
health-related issues of the wearer, and perform further
health-related functions. As such, hearing assist devices may be
worn even by users who have no hearing problems, in order to
detect other health issues.
[0028] For instance, in embodiments, health monitoring technology
may be incorporated into a hearing assist device to monitor the
health of a wearer. Examples of health monitoring technology that
may be incorporated in a hearing assist device include health
sensors that determine (e.g., sense/detect/measure/collect, etc.)
various physical characteristics of the user, such as blood
pressure, heart rate, temperature, humidity, blood oxygen level,
skin galvanometric levels, brain wave information, arrhythmia onset
detection, skin chemistry changes, falling down impacts, long
periods of activity, etc.
[0029] Sensor information resulting from the monitoring may be
analyzed within the hearing assist device, or may be transmitted
from the hearing assist device and analyzed at a remote location.
For instance, the sensor information may be analyzed at a local
computer, in a smart phone or other mobile device, or at a remote
location, such as at a cloud-based server. In response to the
analysis of the sensor information, instructions and/or other
information may be communicated back to the wearer. Such
information may be provided to the wearer by a display screen
(e.g., a desktop computer display, a smart phone display, a tablet
computer display, a medical equipment display, etc.), by the
hearing assist device itself (e.g., by voice, beeps, etc.), or may
be provided to the wearer in another manner. Medical personnel
and/or emergency response personnel (e.g., reachable at the 911
phone number) may be alerted when particular problems with the
wearer are detected by the hearing assist device. The medical
personnel may evaluate information received from the hearing assist
device, and provide information back to the hearing assist
device/wearer. The hearing assist device may provide the wearer
with reminders, alarms, instructions, etc.
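The analysis-and-alert flow described above can be illustrated with a minimal sketch. This is not the patent's algorithm; the function name, sensor label, and heart-rate limits are all assumptions chosen for illustration.

```python
# Illustrative sketch (not from the patent): screening an incoming sensor
# reading against thresholds and producing an alert for the wearer or for
# medical personnel. All names and numeric limits here are assumptions.

HEART_RATE_LIMITS = (40, 180)   # assumed acceptable range, beats per minute

def screen_reading(sensor, value):
    """Return an alert string if the reading is out of range, else None."""
    if sensor == "heart_rate":
        low, high = HEART_RATE_LIMITS
        if value < low:
            return "ALERT: heart rate critically low; notify medical personnel"
        if value > high:
            return "ALERT: heart rate critically high; notify medical personnel"
    return None

print(screen_reading("heart_rate", 55))    # in range -> None
print(screen_reading("heart_rate", 210))   # out of range -> alert text
```

In practice such screening could run locally in the hearing assist device, on a paired phone, or at a cloud-based server, with only the alert routed back to the wearer.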
[0030] The hearing assist device may be configured with
speech/voice recognition capability. For instance, the wearer may
provide commands, such as by voice, to the hearing assist device.
The hearing assist device may be configured to perform various
audio processing functions to suppress background noise and/or
other sounds, as well as amplify other sounds, and may be
configured to modify audio according to a particular frequency
response of the hearing of the wearer. The hearing assist device
may be configured to detect vibrations (e.g., jaw movement of the
wearer during talking), and may use the detected vibrations to aid
in improving speech/voice recognition.
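One way the detected vibrations could aid recognition is by gating commands on whether the wearer is actually speaking. The sketch below is hypothetical and not the patent's method; the function and its arguments are assumptions.

```python
# Hypothetical sketch (not the patent's method): use a jaw-vibration cue to
# gate speech recognition, so ambient voices that do not coincide with the
# wearer speaking are rejected as commands.

def accept_command(recognized_text, jaw_vibration):
    """Accept a recognized command only while the wearer is speaking."""
    return recognized_text if jaw_vibration else None

print(accept_command("volume up", True))    # accepted: 'volume up'
print(accept_command("volume up", False))   # rejected: None
```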
[0031] Hearing assist devices may be configured in various ways,
according to embodiments. For instance, FIG. 1 shows a
communication system 100 that includes a multi-sensor hearing
assist device 102 that communicates with a near field communication
(NFC)-enabled communications device 104, according to an exemplary
embodiment. Hearing assist device 102 may be worn in association
with the ear of a user, and may be configured to communicate with
other devices, such as communications device 104. As shown in FIG.
1, hearing assist device 102 includes a plurality of sensors 106a
and 106b, processing logic 108, an NFC transceiver 110, storage
112, and a rechargeable battery 114. These features of hearing
assist device 102 are described as follows.
[0032] Sensors 106a and 106b are medical sensors that each sense a
characteristic of the user and generate a corresponding sensor
output signal. Although two sensors 106a and 106b are shown in
hearing assist device 102 in FIG. 1, any number of sensors may be
included in hearing assist device 102, including three sensors,
four sensors, five sensors, etc. (e.g., tens of sensors, hundreds
of sensors, etc.). Examples of sensors for sensors 106a and 106b
include a blood pressure sensor, a heart rate sensor, a temperature
sensor, a humidity sensor, a blood oxygen level sensor, a skin
galvanometric level sensor, a brain wave information sensor, an
arrhythmia onset detection sensor (e.g., a chest strap with
multiple sensor pads), a skin chemistry sensor, a motion sensor
(e.g., to detect falling down impacts, long periods of activity,
etc.), an air pressure sensor, etc. These and further types of
sensors suitable for sensors 106a and 106b are further described
elsewhere herein.
[0033] Processing logic 108 may be implemented in hardware (e.g.,
one or more processors, electrical circuits, etc.), or any
combination of hardware with software and/or firmware. Processing
logic 108 may receive sensor information from sensors 106a, 106b,
etc., and may process the sensor information to generate processed
sensor data. Processing logic 108 may execute one or more programs
that define various operational characteristics, such as: (i) a
sequence or order of retrieving sensor information from sensors of
hearing assist device 102, (ii) sensor configurations and
reconfigurations (via a preliminary setup or via adaptations over
the course of time), (iii) routines by which particular sensor data
is at least pre-processed, and (iv) one or more functions/actions
to be performed based on particular sensor data values, etc.
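The program structure sketched in items (i)-(iv) above might look like the following. This is an assumed illustration, not code from the patent; the stub sensors and helper names are invented for the example.

```python
# Minimal sketch (assumed, not from the patent) of the operational loop
# described above: retrieve sensor readings in a defined order, pre-process
# each one, and perform an action based on the resulting value.

def poll_cycle(sensors, preprocess, act):
    """One retrieval cycle over an ordered list of (name, read_fn) sensors."""
    results = {}
    for name, read in sensors:          # (i) fixed retrieval order
        raw = read()                    # sensor output signal
        data = preprocess(name, raw)    # (iii) pre-processing routine
        results[name] = data
        act(name, data)                 # (iv) function based on the value
    return results

# Hypothetical usage with stub sensors:
sensors = [("temp_c", lambda: 36.9), ("heart_rate", lambda: 72)]
out = poll_cycle(sensors,
                 preprocess=lambda n, v: round(v, 1),
                 act=lambda n, v: None)
print(out)  # {'temp_c': 36.9, 'heart_rate': 72}
```

Reconfiguration (item (ii)) would amount to replacing the sensor list, the pre-processing routines, or the action callbacks, which is also how downloaded program updates could change device behavior.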
[0034] For instance, processing logic 108 may store and/or access
sensor data in storage 112, processed or unprocessed. Furthermore,
processing logic 108 may access one or more programs stored in
storage 112 for execution. Storage 112 may include one or more
types of storage, including memory (e.g., random access memory
(RAM), read only memory (ROM), etc.) that is volatile or
non-volatile.
[0035] NFC transceiver 110 is configured to wirelessly communicate
with a second device (e.g., a local or remote supporting device),
such as NFC-enabled communications device 104 according to NFC
techniques. NFC uses magnetic induction between two loop antennas
(e.g., coils, microstrip antennas, etc.) located within each
other's near field, effectively forming an air-core transformer. As
such, NFC communications occur over relatively short ranges (e.g.,
within a few centimeters), and are conducted at radio frequencies.
For instance, in one example, NFC communications may be performed
by NFC transceiver 110 at a 13.56 MHz frequency, with data
transfers of up to 424 kilobits per second. In other embodiments,
NFC transceiver 110 may be configured to perform NFC communications
at other frequencies and data transfer rates. Examples of standards
according to which NFC transceiver 110 may be configured to conduct
NFC communications include ISO/IEC 18092 and those defined by the
NFC Forum, which was founded in 2004 by Nokia, Philips and
Sony.
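The 424 kilobits-per-second figure quoted above implies short transfer times for typical sensor payloads. The arithmetic below is a back-of-the-envelope check that ignores protocol overhead; the payload size is an assumption.

```python
# Ideal (no-overhead) transfer time at the 424 kbit/s NFC data rate
# mentioned above. Real NFC exchanges add framing and handshake overhead.

RATE_BITS_PER_S = 424_000

def transfer_time_ms(payload_bytes):
    """Milliseconds to move a payload at 424 kbit/s, overhead ignored."""
    return payload_bytes * 8 / RATE_BITS_PER_S * 1000

print(transfer_time_ms(1024))  # ~19.3 ms for an assumed 1 KiB sensor report
```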
[0036] NFC-enabled communications device 104 may be configured with
an NFC transceiver to perform NFC communications. NFC-enabled
communications device 104 may be any type of device that may be
enabled with NFC capability, such as a docking station, a desktop
computer (e.g., a personal computer, etc.), a mobile computing
device (e.g., a personal digital assistant (PDA), a laptop
computer, a notebook computer, a tablet computer (e.g., an Apple
iPad.TM.), a netbook, etc.), a mobile phone (e.g., a cell phone, a
smart phone, etc.), a medical appliance, etc. Furthermore,
NFC-enabled communications device 104 may be network-connected to
enable hearing assist device 102 to communicate with entities over
the network (e.g., cloud computers or servers, web services,
etc.).
NFC transceiver 110 enables sensor data (processed or
unprocessed) to be transmitted by processing logic 108 from hearing
assist device 102 to NFC-enabled communications device 104. In this
manner, the sensor data may be reported, processed, and/or analyzed
externally to hearing assist device 102. Furthermore, NFC
transceiver 110 enables processing logic 108 at hearing assist
device 102 to receive data and/or instructions/commands from
NFC-enabled communications device 104 in response to the
transmitted sensor data. Furthermore, NFC transceiver 110 enables
processing logic 108 at hearing assist device 102 to receive
programs (e.g., program code), including new programs, program
updates, applications, "apps", and/or other programs from
NFC-enabled communications device 104 that can be executed by
processing logic 108 to change/update the functionality of hearing
assist device 102.
[0038] Rechargeable battery 114 is a rechargeable battery that
includes one or more electrochemical cells that store charge that
may be used to power components of hearing assist device 102,
including one or more of sensor 106a, 106b, etc., processing logic
108, NFC transceiver 110, and storage 112. Rechargeable battery 114
may be any suitable rechargeable battery type, including lead-acid,
nickel cadmium (NiCd), nickel metal hydride (NiMH), lithium ion
(Li-ion), and lithium ion polymer (Li-ion polymer). Charging of the
batteries may be through a typical tethered recharger or via NFC
power delivery.
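A rough sense of NFC charging time can be had from the cell capacity and effective charge current. Every figure in the sketch below is an assumption (the patent gives no capacities, currents, or efficiencies).

```python
# Rough arithmetic sketch (all figures assumed, not from the patent) of
# wirelessly topping up a small hearing-aid cell via NFC power delivery.

def charge_time_h(capacity_mah, charge_current_ma, efficiency=0.7):
    """Hours to charge, discounting coupling/conversion losses."""
    return capacity_mah / (charge_current_ma * efficiency)

# An assumed 30 mAh cell driven at an assumed 10 mA:
print(charge_time_h(30.0, 10.0))  # ~4.3 hours
```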
[0039] Although NFC communications are shown, alternative
communication approaches can be employed. Such alternatives may
include wireless power transfer schemes as well.
[0040] Hearing assist device 102 may be configured in any manner to
be associated with the ear of a user. For instance, FIGS. 2-4 show
various configurations for associating a hearing assist device with
an ear of a user, according to exemplary embodiments. In FIG. 2,
hearing assist device 102 may be a hearing aid type that fits and
is inserted partially or fully in an ear 202 of a user. As shown in
FIG. 2, hearing assist device 102 includes sensors 106a-106n that
contact the user. Example forms of hearing assist device 102 of
FIG. 2 include ear buds, "receiver in the canal" hearing aids, "in
the ear" (ITE) hearing aids, "invisible in canal" (IIC) hearing
aids, "completely in canal" (CIC) hearing aids, etc. Although not
illustrated, cochlear implant configurations may also be used.
[0041] In FIG. 3, hearing assist device 102 may be a hearing aid
type that mounts on top of, or behind ear 202 of the user. As shown
in FIG. 3, hearing assist device 102 includes sensors 106a-106n
that contact the user. Example forms of hearing assist device 102
of FIG. 3 include "behind the ear" (BTE) hearing aids, "open fit"
or "over the ear" (OTE) hearing aids, eyeglasses hearing aids
(e.g., that contain hearing aid functionality in or on the glasses
arms), etc.
[0042] In FIG. 4, hearing assist device 102 may be a headset or
headphones that mount on the head of the user and include
speakers that are held close to the user's ears. As shown in FIG.
4, hearing assist device 102 includes sensors 106a-106n that
contact the user. In the embodiment of FIG. 4, sensors 106a-106n
may be spaced further apart in the headphones, including being
dispersed in the ear pad(s) and/or along the headband that connects
together the ear pads (when a headband is present).
[0043] It is noted that hearing assist device 102 may be configured
in further forms, including combinations of the forms shown in
FIGS. 2-4, and is not intended to be limited to the embodiments
illustrated in FIGS. 2-4. For instance, hearing assist device 102
may be a cochlear implant-type hearing aid, or other type of
hearing assist device. The following section describes some example
forms of hearing assist device 102 with associated sensor
configurations.
III. Example Hearing Assist Device Forms and Sensor Array
Embodiments
[0044] As described above, hearing assist device 102 may be
configured in various forms, and may include any number and type of
sensors. For instance, FIG. 5 shows a hearing assist device 500
that is an example of hearing assist device 102 according to an
exemplary embodiment. Hearing assist device 500 is configured to
mount over an ear of a user, and has a portion that is at least
partially inserted into the ear. A user may wear a single hearing
assist device 500 on one ear, or may simultaneously wear first and
second hearing assist devices 500 on the user's right and left
ears, respectively.
[0045] As shown in FIG. 5, hearing assist device 500 includes a
case or housing 502 that includes a first portion 504, a second
portion 506, and a third portion 508. First portion 504 is shaped
to be positioned behind/over the ear of a user. For instance, as
shown in FIG. 5, first portion 504 has a crescent shape, and may
optionally be molded in the shape of a user's outer ear (e.g., by
taking an impression of the outer ear, etc.). Second portion 506
extends perpendicularly from a side of an end of first portion 504.
Second portion 506 is shaped to be inserted at least partially into
the ear canal of the user. Third portion 508 extends from second
portion 506, and may be referred to as an earmold shaped to conform
to the user's ear shape, to better adhere hearing assist device 500
to the user's ear.
[0046] As shown in FIG. 5, hearing assist device 500 further
includes a speaker 512, a forward IR/UV (ultraviolet) communication
transceiver 520, a BTLE (BLUETOOTH low energy) antenna 522, at
least one microphone 524, a telecoil 526, a tethered sensor port
528, a skin communication conductor 534, a volume controller 540,
and a communication and power delivery coil 542. Furthermore,
hearing assist device 500 includes a plurality of medical sensors,
including at least one pH sensor 510, an IR (infrared) or sonic
distance sensor 514, an inner ear temperature sensor 516, a
position/motion sensor 518, a WPT (wireless power transfer)/NFC
coil 530, a switch 532, a glucose spectroscopy sensor 536, a heart
rate sensor 538, and a subcutaneous sensor 544. In embodiments,
hearing assist device 500 may include one or more of these further
features and/or alternative features. The features of hearing assist
device 500 are described as follows.
[0047] As shown in FIG. 5, speaker 512, IR or sonic distance sensor
514, and inner ear temperature sensor 516 are located on a circular
surface of second portion 506 of hearing assist device 500 that
faces into the ear of the user. Position/motion sensor 518 and pH
sensor 510 are located on a perimeter surface of second portion 506
around the circular surface that contacts the ear canal of the
user. In alternative embodiments, one or more of these features may
be located in/on different locations of hearing assist device
500.
[0048] pH sensor 510 is a sensor that may be present to measure a
pH of skin of the user's inner ear. The measured pH value may be
used to determine a medical problem of the user, such as an onset of
stroke. pH sensor 510 may include one or more metallic plates. Upon
receiving power (e.g., from rechargeable battery 114 of FIG. 1), pH
sensor 510 may generate a sensor output signal (e.g., an electrical
signal) that indicates a measured pH value.
[0049] Speaker 512 (also referred to as a "loudspeaker") is a
speaker of hearing assist device 500 that broadcasts environmental
sound received by microphone(s) 524, that is subsequently amplified
and/or filtered by processing logic of hearing assist device
500, into the ear of the user to assist the user in hearing the
environmental sound. Furthermore, speaker 512 may broadcast
additional sounds into the ear of the user for the user to hear,
including alerts (e.g., tones, beeping sounds), voice, and/or
further sounds that may be generated by or received by processing
logic of hearing assist device 500, and/or may be stored in hearing
assist device 500.
[0050] IR or sonic distance sensor 514 is a sensor that may be
present to sense a displacement distance. Upon receiving power, IR
or sonic distance sensor 514 may generate an IR light pulse, a
sonic (e.g., ultrasonic) pulse, or other light or sound pulse, that
may be reflected in the ear of the user, and the reflection may be
received by IR or sonic distance sensor 514. A time of reflection
may be compared for a series of pulses to determine a displacement
distance within the ear of user. IR or sonic distance sensor 514
may generate a sensor output signal (e.g., an electrical signal)
that indicates a measured displacement distance.
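The time-of-flight computation implied above is a pulse traveling to a surface and back, so distance is speed times time divided by two, and displacement is the change between successive samples. The sketch below is an assumed illustration for the sonic case; the sample times are invented.

```python
# Sketch (assumed helper, not from the patent) of time-of-flight ranging:
# a pulse reflects off a surface, so one-way distance = speed * time / 2;
# displacement is the difference between successive distance samples.

SPEED_OF_SOUND_M_S = 343.0  # in air at roughly 20 C; an IR variant would
                            # use the speed of light with much finer timing

def distance_m(round_trip_s, speed=SPEED_OF_SOUND_M_S):
    """One-way distance from a round-trip reflection time."""
    return speed * round_trip_s / 2

# Two assumed successive ultrasonic samples:
d1 = distance_m(120e-6)   # ~20.6 mm
d2 = distance_m(110e-6)   # ~18.9 mm
print(f"displacement: {abs(d1 - d2) * 1000:.2f} mm")  # ~1.7 mm
```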
[0051] A distance and eardrum deflection determined using IR or
sonic distance sensor 514 (e.g., via high-rate or continuous
sampling) may be used to calculate an
estimate of the "actual" or "true" decibel level of an audio signal
being input to the ear of the user. By incorporating such
functionality, hearing assist device 500 can perform the following
when a user inserts and turns on hearing assist device 500: (i)
automatically adjust the volume to fall within a target range; and
(ii) prevent excess volume associated with unexpected loud sound
events. It is noted that the amount of volume adjustment that may
be applied can vary by frequency. It is also noted that the excess
volume associated with unexpected loud sound events may be further
prevented by using a hearing assist device that has a relatively
tight fit, thereby allowing the hearing assist device to act as an
ear plug.
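The displacement computation described in paragraphs [0050]-[0051] may be sketched as follows. This is an illustrative example only, not part of the application; the sonic-pulse constant and function names are hypothetical, and an IR implementation would use the speed of light rather than the speed of sound.

```python
SPEED_OF_SOUND_MM_PER_US = 0.343  # ~343 m/s in air, in mm per microsecond

def distance_mm(round_trip_us):
    """One-way distance implied by a pulse round-trip time (sonic case)."""
    return round_trip_us * SPEED_OF_SOUND_MM_PER_US / 2.0

def displacement_series_mm(round_trip_times_us):
    """Displacement of the reflecting surface (e.g., the eardrum)
    relative to the first pulse in a series of pulses."""
    base = distance_mm(round_trip_times_us[0])
    return [distance_mm(t) - base for t in round_trip_times_us]
```

A growing displacement magnitude across consecutive pulses would correspond to larger eardrum deflection, which processing logic could map to an estimated decibel level of the input audio.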
[0052] Hearing efficiency and performance data over the spectrum of
normal audible frequencies can be gathered by delivering each
frequency (or frequency range) at an output volume level, measuring
eardrum deflection characteristics, and delivering audible test
questions to the user via hearing assist device 500. This can be
accomplished solely by hearing assist device 500 or with assistance
from a smartphone or other external device or service. For example,
a user may respond to an audio (or textual) prompt "Can you hear
this?" with a "yes" or "no" response. The response is received by
microphone(s) 524 (or via touch input for example) and processed
internally or on an assisting external device to identify the
response. Depending on the user's response, the amplitude of the
audio output can be adjusted to determine a given user's hearing
threshold for each frequency (or frequency range). From this
hearing efficiency and performance data, input frequency
equalization can be performed by hearing assist device 500 so as to
deliver to the user audio signals that will be perceived in much
the same way as someone with no hearing impairment. In addition,
such data can be delivered to the assisting external device (e.g.,
to a smartphone) for use by such device in producing audio output
for the user. For example, the assisting device can deliver an
adjusted audio output tailored for the user if (i) the user is not
wearing hearing assist device 500, (ii) the battery power of
hearing assist device 500 is depleted, (iii) hearing assist device
500 is powered down, or (iv) hearing assist device 500 is operating
in a lower power mode. In such situations, the supporting device
can deliver the audio signal: (a) in an audible form via a speaker
which will be generated with the intent of directly reaching the
eardrum; (b) in an audible form intended for receipt and
amplification control by hearing assist device 500 without further
need for user specific audio equalization; and (c) in a non-audible
form (e.g., electromagnetic transmission) for receipt and conversion
to an audible form by hearing assist device 500 and again without
further equalization.
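The per-frequency threshold search described in paragraph [0052] might be sketched as below. This is an assumed illustration, not the application's implementation; `can_hear_at` stands in for the microphone or touch response path, and the starting level and step size are hypothetical.

```python
def find_threshold_db(can_hear, low_db=0.0, high_db=90.0, step=5.0):
    """Descend from a clearly audible level until the user stops hearing
    the tone; the last audible level approximates the threshold."""
    level = high_db
    last_heard = None
    while level >= low_db:
        if can_hear(level):
            last_heard = level
            level -= step
        else:
            break
    return last_heard

def hearing_profile(frequencies_hz, can_hear_at):
    """Map each test frequency to an estimated threshold in dB."""
    return {f: find_threshold_db(lambda db: can_hear_at(f, db))
            for f in frequencies_hz}
```

The resulting profile is the kind of hearing efficiency and performance data from which input frequency equalization could be derived.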
[0053] After testing and setup, a wearer may further tweak their
recommended equalization via slide bars and such in a manner
similar to adjusting equalization for other conventional audio
equipment. Such tweaking can be carried out via the supporting
device user interface. In addition, a plurality of equalization
settings can be supported with each being associated with a
particular mode of operation of hearing assist device 500. That is,
conversation in a quiet room with one other person might receive one
equalization profile, while a concert hall might receive another.
Modes can be selected in many automatic or commanded ways via
either or both hearing assist device 500 and the external
supporting device. Automatic selection can be performed via
analysis and classification of captured audio. Certain
classifications may trigger selection of a particular mode.
Commands may be delivered via any user input interface such as voice
input (voice recognized commands), tactile input commands, etc.
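The association of equalization profiles with operating modes, and automatic selection from a classification of captured audio, could be sketched as follows. The profile names and per-band gains are hypothetical illustrations only.

```python
# Per-band gains in dB for each mode of operation (illustrative values).
EQ_PROFILES = {
    "quiet_conversation": {"low": 0, "mid": 6, "high": 9},
    "concert_hall":       {"low": -3, "mid": 2, "high": 4},
    "default":            {"low": 0, "mid": 0, "high": 0},
}

def select_profile(classification):
    """Select the equalization profile for an audio classification,
    falling back to the default profile when the class is unknown."""
    return EQ_PROFILES.get(classification, EQ_PROFILES["default"])
```

A commanded selection (voice or tactile input) could simply bypass the classifier and index the same table directly.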
[0054] Audio modes may also comprise alternate or additional audio
processing techniques. For example, in one mode, to enhance
audio perspective and directionality, delays might be selectively
introduced (or increased in a stereoscopic manner) to enhance a
wearer's ability to discern the location of an audio source. Sensor
data may support automatic mode selection in such situations.
Detecting walking impacts and outdoor GPS (Global Positioning
System) location might automatically trigger such enhanced
perspective mode. A medical condition might trigger another mode
which attenuates environmental audio while delivering synthesized
voice commands to the wearer. In another exemplary mode, both
echoes and delays might be introduced to simulate a theater
environment. For example, when audio is being sourced by a
television channel broadcast of a movie, the theater environment
mode might be selected. Such selection may be in response to a set
top box, television or media player's commands or by identifying
one of the same as the audio source.
[0055] Any or all of such functionality can be carried
out by one or both of hearing assist device 500 and an external
supporting device. When assisting hearing assist device 500, the
external supporting device may receive the audio for processing:
(i) directly via built in microphones; (ii) from storage; or (iii)
via yet another external device. Alternatively, the source audio
may be captured by hearing assist device 500 itself and delivered
via a wired or wireless pathway to the external supporting device
for processing before delivery of either the processed audio
signals or substitute audio back to hearing assist device 500 for
delivery to the wearer.
[0056] Similarly, sensor data may be captured in one or both of
hearing assist device 500 and an external supporting device. Sensor
data captured by hearing assist device 500 may likewise be
delivered via such or other wired or wireless pathways to the
external supporting device for (further) processing. The external
supporting device may then respond to the sensor data received and
processed by delivering audio content and/or hearing aid commands
back to hearing assist device 500. Such commands may be to
reconfigure some aspect of hearing assist device 500 or manage
communication or power delivery. Such audio content may be
instructional, comprise queries, or consist of commands to be
delivered to the wearer via the ear drums. Sensor data may be stored
and displayed in some form locally on the external supporting
device along with similar audio, graphical or textual content,
commands or queries. In addition, such sensor data can be further
delivered to yet other external supporting devices for further
processing, analysis and storage. Sensors within one or both of
hearing assist device 500 and an external supporting device may be
medical sensors or environmental sensors (e.g., latitude/longitude,
velocity, temperature, wearer's physical orientation, acceleration,
elevation, tilt, humidity, etc.).
[0057] Although not shown, hearing assist device 500 may also be
configured with an imager that may be located near transceiver 520.
The imager can then be used to capture images or video that may be
relayed to one or more external supporting devices for real time
display, storage or processing. For example, upon detecting a medical
situation and receiving no response to audible content queries delivered
via hearing assist device 500, the imager can be commanded (from an
internal or external command origin) to capture an image or a video sequence.
Such imager output can be delivered to medical staff via a user's
supporting smartphone so that a determination can be made as to the
user's condition or the position/location of hearing assist device
500.
[0058] Inner ear temperature sensor 516 is a sensor that may be
present to measure a temperature of the user. For instance, in an
embodiment, upon receiving power, inner ear temperature sensor 516
may include a lens used to measure inner ear temperature. IR light
emitted by an IR light emitter may be reflected from skin of the user,
such as the ear canal or ear drum, and received by a single temperature
sensor element, a one-dimensional array of temperature sensor
elements, a two-dimensional array of temperature sensor elements,
or other configuration of temperature sensor elements. Inner ear
temperature sensor 516 may generate a sensor output signal (e.g.,
an electrical signal) that indicates a measured inner ear
temperature.
[0059] Such a configuration may also be used to determine a
distance to the user's ear drum. The IR light emitter and sensor
may be used to determine a distance to the user's ear drum from
hearing assist device 500, which may be used by processing logic to
automatically control a volume of sound emitted from hearing assist
device 500, as well as for other purposes. Furthermore, the IR
light emitter/sensor may also be used as an imager that captures an
image of the inside of the user's ear. This could be used to
identify characteristics of vein structures inside the user's ear,
for example. The IR light emitter/sensor could also be used to
detect the user's heartbeat, as well as to perform further
functions. Note that hearing assist device 500 may include a light
sensor that senses outdoor light levels for various purposes.
[0060] Position/motion sensor 518 includes one or more sensors that
may be present to measure time of day, location, acceleration,
orientation, vibrations, and/or other movement related
characteristics of the user. For instance, position/motion sensor
518 may include one or more of a GPS (global positioning system)
receiver (to measure user position), an accelerometer (to measure
acceleration of the user), a gyroscope (to measure orientation of
the head of the user), a magnetometer (to determine a direction the user
is facing), a vibration sensor (e.g., a micro-electromechanical
system (MEMS) vibration sensor), etc. Position/motion sensor 518
may be used for various benefits, including determining whether a
user has fallen (e.g., based on measured position, acceleration,
orientation, etc.), for local VoD, and many more benefits.
Position/motion sensor 518 may generate a sensor output signal
(e.g., an electrical signal) that indicates one or more of the
measured time of day, location, acceleration, orientation,
vibration, etc.
[0061] In one example, MEMS sensors may be configured to record the
position/movement of the head of a wearer for health purposes, for
location applications, and/or for other reasons. A wireless
transceiver and a MEMS sensor of hearing assist device 500 can
determine the location of the head of the user, which may be a more
meaningful positioning reference point than a position of a mobile
device (e.g., cellphone) held against the user's head by the user.
In this manner, hearing assist device 500 may be configured to tell
the user that the user is not looking at the road properly when
driving. In another example, in this manner, hearing assist device
500 may determine that the wearer fell down and may send a
communication signal to the user's mobile device to dial 911 or
other emergency number. If the user is wearing a pair of hearing
assist devices 500, wireless communication signals may be used to
help triangulate and determine position of the head. The user may
shake their head up/down and/or may otherwise move their head to
answer verbal commands provided by hearing assist device 500 and/or
by the user's phone without the user having to speak. The user may be
enabled to speak to hearing assist device 500 to respond to
commands (e.g., "did you fall?", "are you alright?", "should I dial
for help?", "are you falling asleep?", etc.). Position data can be
processed in hearing assist device 500, in the mobile device, in
the "cloud", etc. In an embodiment, to save power, the position
data may be used to augment mobile/cloud data for better accuracy
and for special circumstances. Hearing assist device 500 may determine
a proximity to the mobile device of the user even if the camera on
the mobile device is not in view. Based upon position and sensors,
hearing assist device 500 may determine the direction the person is
looking, to aid artificial reality. In an embodiment, hearing
assist device 500 may be configured to calibrate position data when
the head is in view of a remote camera.
[0062] The sensor information indicated by position/motion sensor
518 and/or other sensors may be used for various purposes. For
instance, position/motion information may be used to determine that
the user has fallen down/collapsed. In response, voice and/or video
assist (e.g., by a handheld device in communication with hearing
assist device 500) may be used to gather feedback from the user
(e.g., to find out if they are ok, and/or to further supplement the
sensor data collection (which triggered the feedback request)).
Such sensor data and feedback information, if warranted, can be
automatically forwarded to medical staff, ambulance services,
and/or family members, for example, as described elsewhere herein.
The analysis of the data that triggered the forwarding process may
be performed in whole or in part on one (or both) of hearing assist
device 500, and/or on the assisting local device (e.g., a smart
phone, tablet computer, set top box, TV, etc., in communication
with a hearing assist device 500) and/or remote computing systems
(e.g., at medical staff offices or as might be available through a
cloud or portal service).
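As one illustrative sketch (not the application's implementation) of how position/motion data might indicate that the user has fallen down/collapsed, a large impact spike in accelerometer magnitude followed by a near-still period can be flagged; the thresholds and sample window below are hypothetical.

```python
import math

def magnitude(sample):
    """Magnitude (in g) of an (x, y, z) accelerometer sample."""
    return math.sqrt(sum(axis * axis for axis in sample))

def detect_fall(samples, impact_g=2.5, still_g=1.1, still_count=5):
    """Flag a fall when an impact spike is followed by near-stillness.

    samples: list of (x, y, z) accelerometer readings in g.
    """
    for i, s in enumerate(samples):
        if magnitude(s) >= impact_g:
            after = samples[i + 1:i + 1 + still_count]
            if len(after) == still_count and all(
                    magnitude(a) <= still_g for a in after):
                return True
    return False
```

A positive detection would then trigger the voice/video feedback request, and, if warranted, the forwarding to medical staff described above.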
[0063] As shown in FIG. 5, forward IR/UV (ultraviolet)
communication transceiver 520, BTLE antenna 522, microphone(s) 524,
telecoil 526, tethered sensor port 528, WPT/NFC coil 530, switch
532, skin communication conductor 534, glucose spectroscopy sensor
536, a heart rate sensor 538, volume controller 540, and
communication and power delivery coil 542 are located at different
locations in/on the first portion 504 of hearing assist device 500.
In alternative embodiments, one or more of these features may be
located in/on different locations of hearing assist device 500.
[0064] Forward IR/UV communication transceiver 520 is a
communication mechanism that may be present to enable
communications with another device, such as a smart phone,
computer, etc. Forward IR/UV communication transceiver 520 may
receive information/data from processing logic of hearing assist
device 500 to be transmitted to the other device in the form of
modulated light (e.g., IR light, UV light, etc.), and may receive
information/data in the form of modulated light from the other
device to be provided to the processing logic of hearing assist
device 500. Forward IR/UV communication transceiver 520 may enable
low power communications for hearing assist device 500, to reduce a
load on a battery of hearing assist device 500. In an embodiment,
an emitter/receiver of forward IR/UV communication transceiver 520
may be positioned on housing 502 to be facing forward in a
direction a wearer of hearing assist device 500 faces. In this
manner, the forward IR/UV communication transceiver 520 may
communicate with a device held by the wearer, such as a smart
phone, a tablet computer, etc., to provide text to be displayed to
the wearer, etc.
[0065] BTLE antenna 522 is a communication mechanism coupled to a
Bluetooth.TM. transceiver in hearing assist device 500 that may be
present to enable communications with another device, such as a
smart phone, computer, etc. BTLE antenna 522 may receive
information/data from processing logic of hearing assist device 500
to be transmitted to the other device according to the
Bluetooth.TM. specification, and may receive information/data
transmitted according to the Bluetooth.TM. specification from the
other device to be provided to the processing logic of hearing
assist device 500.
[0066] Microphone(s) 524 is a sensor that may be present to receive
environmental sounds, including voice of the user, voice of other
persons, and other sounds in the environment (e.g., traffic noise,
music, etc.). Microphone(s) 524 may include any number of
microphones, and may be configured in any manner, including being
omni-directional (non-directional), directional, etc. Microphone(s)
524 generates an audio signal based on the received environmental
sound that may be processed and/or filtered by processing logic of
hearing assist device 500, may be stored in digital form in hearing
assist device 500, may be transmitted from hearing assist device
500, and may be used in other ways.
[0067] Telecoil 526 is a communication mechanism that may be
present to enable communications with another device. Telecoil 526
is an audio induction loop that enables audio sources to be
directly coupled to hearing assist device 500 in a manner known to
persons skilled in the relevant art(s). Telecoil 526 may be used
with a telephone, a radio system, and induction loop systems that
transmit sound to hearing aids.
[0068] Tethered sensor port 528 is a port that a remote sensor
(separate from hearing assist device 500) may be coupled with to
interface with hearing assist device 500. For instance, port 528
may be an industry standard or proprietary connector type. A remote
sensor may have a tether (one or more wires) with a connector at an
end that may be plugged into port 528. Any number of tethered
sensor ports 528 may be present. Examples of sensor types that may
interface with tethered sensor port 528 include brainwave sensors
(e.g., electroencephalography (EEG) sensors that record electrical
activity along the scalp according to EEG techniques) attached to
the user's scalp, heart rate/arrhythmia sensors attached to a chest
of the user, etc. Such brainwave sensors may record/measure
electrical signals of the user's brain.
[0069] WPT/NFC coil 530 is a communication mechanism coupled to a
NFC transceiver in hearing assist device 500 that may be present to
enable communications with another device, such as a smart phone,
computer, etc., as described above with respect to NFC transceiver
110 (FIG. 1).
[0070] Switch 532 is a switching mechanism that may be present on
housing 502 to perform various functions, such as switching power
on or off, switching between different power and/or operational
modes, etc. A user may interact with switch 532 to switch power on
or off, to switch between modes, etc. Switch 532 may be any type of
switch, including a toggle switch, a push button switch, a rocker
switch, a three-(or greater) position switch, a dial switch,
etc.
[0071] Skin communication conductor 534 is a communication
mechanism coupled to a transceiver in hearing assist device 500
that may be present to enable communications with another device,
such as a smart phone, computer, etc., through skin of the user.
For instance, skin communication conductor 534 may enable
communications to flow between hearing assist device 500 and a
smart phone held in the hand of the user, a second hearing assist
device worn on an opposite ear of the user, a pacemaker or other
device implanted in the user, or other communications device in
communication with skin of the user. A transceiver of hearing
assist device 500 may receive information/data from processing
logic to be transmitted from skin communication conductor 534
through the user's skin to the other device, and the transceiver
may receive information/data at skin communication conductor 534
that was transmitted from the other device through the user's skin
to be provided to the processing logic of hearing assist device
500.
[0072] Glucose spectroscopy sensor 536 is a sensor that may be
present to measure a glucose level of the user using spectroscopy
techniques in a manner known to persons skilled in the relevant
art(s). Such a measurement may be valuable in determining whether a
user has diabetes. Such a measurement can also be valuable in
helping a diabetic user determine whether insulin is needed, etc.
(e.g., hypoglycemia or hyperglycemia). Glucose spectroscopy sensor
536 may be configured to monitor glucose in combination with
subcutaneous sensor 544. As shown in FIG. 5, subcutaneous sensor
544 is shown separate from, and proximate to hearing assist device
500. In such an embodiment, subcutaneous sensor 544 may be embedded
in the user's skin, in or around the user's ear. In an alternative
embodiment, subcutaneous sensor 544 may be located in/on hearing
assist device 500. Subcutaneous sensor 544 is a sensor that may be
present to measure any attribute of a user's health,
characteristics or status. For example, subcutaneous sensor 544 may
be a glucose sensor implanted under the skin behind the ear so as
to provide a reasonably close mating location with communication
and power delivery coil 542. When powered, glucose spectroscopy
sensor 536 may measure the user's glucose level with respect to
subcutaneous sensor 544, and may generate a sensor output signal
(e.g., an electrical signal) that indicates a glucose level of the
user.
[0073] Heart rate sensor 538 is a sensor that may be present to
measure a heart rate of the user. For instance, in an embodiment,
upon receiving power, heart rate sensor 538 may measure pressure
changes with respect to a blood vessel in the ear, or may measure heart
rate in another manner such as changes in reflectivity or otherwise
as would be known to persons skilled in the relevant art(s). Missed
beats, elevated heart rate, and further heart conditions may be
detected in this manner. Heart rate sensor 538 may generate a
sensor output signal (e.g., an electrical signal) that indicates a
measured heart rate. In addition, subcutaneous sensor 544 might
comprise at least a portion of an internal heart monitoring device
which communicates heart status information and data via communication
and power delivery coil 542. Subcutaneous sensor 544 could
also be associated with or be part of a pacemaker or defibrillating
implant, insulin pump, etc.
[0074] Volume controller 540 is a user interface mechanism that may
be present on housing 502 to enable a user to modify a volume at
which sound is broadcast from speaker 512. A user may interact with
volume controller 540 to increase or decrease the volume. Volume
controller 540 may be any suitable controller type (e.g., a
potentiometer), including a rotary volume dial, a thumb wheel, a
capacitive touch sensing device, etc.
[0075] Instead of supporting both power delivery and
communications, communication and power delivery coil 542 may be
dedicated to one or the other. For example, such coil may only
support power delivery (if needed to charge or otherwise deliver
power to subcutaneous sensor 544), and can be replaced with any
other type of communication system that supports communication with
subcutaneous sensor 544.
[0076] It is noted that the coils/antennas of hearing assist device
500 may be separately included in hearing assist device 500, or in
embodiments, two or more of the coils/antennas may be combined as a
single coil/antenna.
[0077] The processing logic of hearing assist device 500 may be
operable to set up/configure and adaptively reconfigure each of the
sensors of hearing assist device 500 based on an analysis of the
data obtained by such sensor as well as on an analysis of data
obtained by other sensors. For example, a first sensor of hearing
assist device 500 may be configured to operate at one sampling rate
(or sensing rate) which is analyzed periodically or continuously.
Furthermore, a second sensor of hearing assist device 500 can be in
a sleep or power down mode to conserve battery power. When a
threshold is exceeded or other triggering event occurs, such first
sensor can be reconfigured by the processing logic of hearing
assist device 500 to sample at a higher rate or continuously and
the second sensor can be powered up and configured. Additionally,
multiple types of sensor data can be used to construct or derive
single conclusions. For example, heart rate can be gathered
multiple ways (via multiple sensors) and combined to provide a more
robust and trustworthy conclusion. Likewise, a combination of data
obtained from different sensors (e.g., pH plus temperature plus
horizontal posture plus impact detected plus weak heart rate) may
result in an ambulance being called or indicate a possible heart
attack. Or, if glucose is too high, hyperglycemia may be indicated
while if glucose is too low, hypoglycemia may be indicated. Or, if
glucose and heart data is acceptable, then a stroke may be
indicated. This processing can be done in whole or in part within
hearing assist device 500 with audio content being played to the
wearer thereof to gather further voiced information from the wearer
to assist in conclusions or to warn the wearer.
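The threshold-triggered reconfiguration described in paragraph [0077] may be sketched as follows. The class, sampling rates, and threshold are illustrative assumptions, not the application's implementation.

```python
class Sensor:
    """Minimal stand-in for a configurable hearing assist device sensor."""
    def __init__(self, name, rate_hz=1, powered=True):
        self.name, self.rate_hz, self.powered = name, rate_hz, powered

def on_reading(value, threshold, primary, secondary, fast_rate_hz=50):
    """When a reading exceeds the threshold, raise the first sensor's
    sampling rate and wake the second (sleeping) sensor; otherwise leave
    both in their low-power state to conserve battery."""
    if value > threshold:
        primary.rate_hz = fast_rate_hz
        secondary.powered = True
    return primary, secondary
```

In the same spirit, readings from the newly woken sensor could then be combined with the first sensor's data to reach a single, more trustworthy conclusion.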
[0078] FIG. 6 shows a hearing assist device 600 that is an example
of hearing assist device 102 according to an exemplary embodiment.
Hearing assist device 600 is configured to be at least partially
inserted into the ear canal of a user (e.g., an ear bud). A user
may wear a single hearing assist device 600 on one ear, or may
simultaneously wear first and second hearing assist devices 600 on
the user's right and left ears, respectively.
[0079] As shown in FIG. 6, hearing assist device 600 includes a
case or housing 602 that has a generally cylindrical shape, and
includes a first portion 604, a second portion 606, and a third
portion 608. First portion 604 is shaped to be inserted at least
partially into the ear canal of the user. Second portion 606
extends coaxially from first portion 604. Third portion 608 is a
handle that extends from second portion 606. A user grasps third
portion 608 to extract hearing assist device 600 from the ear of
the user.
[0080] As shown in FIG. 6, hearing assist device 600 further
includes pH sensor 510, speaker 512, IR (infrared) or sonic
distance sensor 514, inner ear temperature sensor 516, and an
antenna 610. pH sensor 510, speaker 512, IR (infrared) or sonic
distance sensor 514, and inner ear temperature sensor 516 may function
and be configured similarly as described above. In another
embodiment, hearing assist device 600 may include an outer ear
temperature sensor to determine outside ear temperature. Antenna
610 may include one or more coils or other types of antennas to
function as any one or more of the coils/antennas described above
with respect to FIG. 5 and/or elsewhere herein (e.g., an NFC
antenna, a Bluetooth.TM. antenna, etc.).
[0081] It is noted that antennas, such as coils, mentioned herein
may be implemented as any suitable type of antenna, including a
coil, a microstrip antenna, or other antenna type. Although further
sensors, communication mechanisms, switches, etc., of hearing
assist device 500 of FIG. 5 are not shown included in hearing
assist device 600, one or more further of these features of hearing
assist device 500 may additionally and/or alternatively be included
in hearing assist device 600. Furthermore, sensors that are present
in a hearing assist device may all operate simultaneously, or one
or more sensors may be run periodically, and may be off at other
times (e.g., based on an algorithm in program code, etc.). By
running fewer sensors at any one time, battery power may be
conserved. Note that in addition to one or more of sensor data
compression, analysis, encryption, and processing, sensor
management (duty cycling, continuous operations, threshold
triggers, sampling rates, etc.) can be performed in whole or in
part in any one or both hearing assist devices, the assisting local
device (e.g., smart phone, tablet computer, set top box, TV, etc.),
and/or remote computing systems (at medical staff offices or as
might be available through a cloud or portal service).
[0082] Hearing assist devices 102, 500, and 600 may be configured
in various ways with circuitry to process sensor information, and
to communicate with other devices. The next section describes some
example circuit embodiments for hearing assist devices, as well as
processes for communicating with other devices, and for further
functionality.
IV. Example Hearing Assist Device Circuit and Process
Embodiments
[0083] According to embodiments, hearing assist devices may be
configured in various ways to perform their functions. For
instance, FIG. 7 shows a circuit block diagram of a hearing assist
device 700 that is configured to communicate with external devices
according to multiple communication schemes, according to an
exemplary embodiment. Hearing assist devices 102, 500, and 600 may
each be implemented similarly to hearing assist device 700,
according to embodiments.
[0084] As shown in FIG. 7, hearing assist device 700 includes a
plurality of sensors 702a-702c, processing logic 704, a microphone
706, an amplifier 708, a filter 710, an analog-to-digital (A/D)
converter 712, a speaker 714, an NFC coil 716, an NFC transceiver
718, an antenna 720, a Bluetooth.TM. transceiver 722, a charge
circuit 724, a battery 726, a plurality of sensor interfaces
728a-728c, and a digital-to-analog (D/A) converter 764. Processing
logic 704 includes a digital signal processor (DSP) 730, a central
processing unit (CPU) 732, and a memory 734. Sensors 702a-702c,
processing logic 704, amplifier 708, filter 710, A/D converter 712,
NFC transceiver 718, Bluetooth.TM. transceiver 722, charge circuit
724, sensor interfaces 728a-728c, D/A converter 764, DSP 730, CPU
732 may each be implemented in the form of hardware (e.g.,
electrical circuits, digital logic, etc.) or a combination of
hardware and software/firmware. The features of hearing assist
device 700 shown in FIG. 7 are described as follows.
[0085] The hearing aid functionality of hearing assist
device 700 is first described. In FIG. 7, microphone 706, amplifier
708, filter 710, A/D converter 712, processing logic 704, D/A
converter 764, and speaker 714 provide at least some of the hearing
aid functionality of hearing assist device 700. Microphone 706 is a
sensor that receives environmental sounds, including voice of the
user of hearing assist device 700, voice of other persons, and
other sounds in the environment (e.g., traffic noise, music, etc.).
Microphone 706 may be configured in any manner, including being
omni-directional (non-directional), directional, etc., and may
include one or more microphones. Microphone 706 may be a miniature
microphone conventionally used in hearing aids, as would be known
to persons skilled in the relevant art(s), or may be another
suitable type of microphone. Microphone(s) 524 (FIG. 5) is an
example of microphone 706. Microphone 706 generates a received
audio signal 740 based on the received environmental sound.
[0086] Amplifier 708 receives and amplifies received audio signal
740 to generate an amplified audio signal 742. Amplifier 708 may be
any type of amplifier, including a low-noise amplifier for
amplifying low level signals. Filter 710 receives and processes
amplified audio signal 742 to generate a filtered audio signal 744.
Filter 710 may be any type of filter, including being a filter
configured to filter out noise, other high frequencies, and/or
other frequencies as desired. A/D converter 712 receives filtered
audio signal 744, which may be an analog signal, and converts
filtered audio signal 744 to digital form, to generate a digital
audio signal 746. A/D converter 712 may be configured in any
manner, including as a conventional A/D converter.
[0087] Processing logic 704 receives digital audio signal 746, and
may process digital audio signal 746 in any manner to generate
processed digital audio signal 762. For instance, as shown in FIG.
7, DSP 730 may receive digital audio signal 746, and may perform
digital signal processing on digital audio signal 746 to generate
processed digital audio signal 762. DSP 730 may be configured in
any manner, including as a conventional DSP known to persons skilled
in the relevant art(s), or in another manner. DSP 730 may perform
any suitable type of digital signal processing to process/filter
digital audio signal 746, including processing digital audio signal
746 in the frequency domain to manipulate the frequency spectrum of
digital audio signal 746 (e.g., according to Fourier
transform/analysis techniques, etc.). DSP 730 may amplify
particular frequencies, may attenuate particular frequencies, and
may otherwise modify digital audio signal 746 in the discrete
domain. DSP 730 may perform the signal processing for various
reasons, including noise cancelation or hearing loss compensation.
For instance, DSP 730 may process digital audio signal 746 to
compensate for a personal hearing frequency response of the user,
such as compensating for poor hearing of high frequencies, middle
range frequencies, or other personal frequency response
characteristics of the user.
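The frequency-domain compensation described above can be sketched in simplified form as follows. This is a minimal illustration using NumPy, not the patent's implementation; the band edges and gain values are assumptions chosen only for the example.

```python
import numpy as np

def compensate_hearing_response(samples, sample_rate, band_gains_db):
    """Apply per-band gain in the frequency domain to compensate a
    personal hearing frequency response, in the spirit of the DSP 730
    processing described above. band_gains_db maps (low_hz, high_hz)
    frequency bands to gains in dB (illustrative values only)."""
    spectrum = np.fft.rfft(samples)
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    for (low, high), gain_db in band_gains_db.items():
        mask = (freqs >= low) & (freqs < high)
        spectrum[mask] *= 10.0 ** (gain_db / 20.0)  # dB -> linear amplitude
    return np.fft.irfft(spectrum, n=len(samples))

# Example: boost 4-8 kHz by 12 dB for a user with high-frequency
# hearing loss (hypothetical numbers, for illustration only).
rate = 16000
t = np.arange(rate) / rate
audio = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 6000 * t)
out = compensate_hearing_response(audio, rate, {(4000, 8000): 12.0})
```

In this sketch the 6 kHz component is amplified while the 440 Hz component passes through unchanged, mirroring the selective amplification and attenuation attributed to DSP 730.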
[0088] In one embodiment, DSP 730 may be pre-configured to process
digital audio signal 746. In another embodiment, DSP 730 may
receive instructions from CPU 732 regarding how to process digital
audio signal 746. For instance, CPU 732 may access one or more DSP
configurations stored in memory 734 (e.g., in other data 768)
that may be provided to DSP 730 to configure DSP 730 for digital
signal processing of digital audio signal 746. For instance, CPU
732 may select a DSP configuration based on a hearing assist mode
selected by a user of hearing assist device 700 (e.g., by
interacting with switch 532, etc.).
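One way such mode-based selection could be arranged is a simple lookup keyed by the selected mode. This is a hypothetical sketch; the mode names and configuration fields below are illustrative assumptions, not defined by the patent.

```python
# Hypothetical mode names and DSP configuration fields, for
# illustration only; the patent specifies neither.
DSP_CONFIGS = {
    "quiet_room":   {"noise_gate_db": -60.0, "high_band_gain_db": 6.0},
    "noisy_street": {"noise_gate_db": -40.0, "high_band_gain_db": 3.0},
    "music":        {"noise_gate_db": -70.0, "high_band_gain_db": 0.0},
}

# Flat default when no stored configuration matches the selected mode.
DEFAULT_CONFIG = {"noise_gate_db": -60.0, "high_band_gain_db": 0.0}

def select_dsp_config(mode):
    """Return the stored DSP configuration for a user-selected hearing
    assist mode (e.g., chosen by interacting with switch 532),
    falling back to a flat default for unknown modes."""
    return DSP_CONFIGS.get(mode, DEFAULT_CONFIG)
```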
[0089] As shown in FIG. 7, D/A converter 764 receives processed
digital audio signal 762, and converts processed digital audio
signal 762 to analog form, generating processed audio signal 766.
D/A converter 764 may be configured in any manner, including as a
conventional D/A converter. Speaker 714 receives processed audio
signal 766, and broadcasts sound generated based on processed audio
signal 766 into the ear of the user. The user is enabled to hear
the broadcast sound, which may be amplified, filtered, and/or
otherwise frequency manipulated with respect to the sound received
by microphone 706. Speaker 714 may be a miniature speaker
conventionally used in hearing aids, as would be known to persons
skilled in the relevant art(s), or may be another suitable type of
speaker. Speaker 512 (FIG. 5) is an example of speaker 714. Speaker
714 may include one or more speakers.
[0090] Hearing assist device 700 of FIG. 7 is further described as
follows with respect to FIGS. 8-14. FIG. 8 shows a flowchart 800 of
a process for a hearing assist device that processes and transmits
sensor data and receives a command from a second device, according
to an exemplary embodiment. In an embodiment, hearing assist device
700 (as well as any of hearing assist devices 102, 500, and 600)
may perform flowchart 800. Further structural and operational
embodiments will be apparent to persons skilled in the relevant
art(s) based on the following description of flowchart 800 and
hearing assist device 700.
[0091] Flowchart 800 begins with step 802. In step 802, a sensor
output signal is received from a medical sensor of the hearing
assist device that senses a characteristic of the user. For
example, as shown in FIG. 7, sensors 702a-702c may each
sense/measure information about a health characteristic of the user
of hearing assist device 700. Sensors 702a-702c may each be one of
the sensors shown in FIGS. 5 and 6, and/or mentioned elsewhere
herein. Although three sensors are shown in FIG. 7 for purposes of
illustration, other numbers of sensors may be present in hearing
assist device 700, including one sensor, two sensors, or greater
numbers of sensors. Sensors 702a-702c each may generate a
corresponding sensor output signal 758a-758c (e.g., an electrical
signal) that indicates the measured information about the
corresponding health characteristic. For instance, sensor output
signals 758a-758c may be analog or digital signals having levels or
values corresponding to the measured information.
[0092] Sensor interfaces 728a-728c are each optionally present,
depending on whether the corresponding sensor outputs a sensor
output signal that needs to be modified to be receivable by CPU
732. For instance, each of sensor interfaces 728a-728c may include
an amplifier, filter, and/or A/D converter (e.g., similar to
amplifier 708, filter 710, and A/D converter 712) that respectively
amplify (e.g., increase or decrease), filter out particular
frequencies, and/or convert to digital form the corresponding
sensor output signal. Sensor interfaces 728a-728c (when present)
respectively output modified sensor output signals 760a-760c.
[0093] In step 804, the sensor output signal is processed to
generate processed sensor data. For instance, as shown in FIG. 7,
processing logic 704 receives modified sensor output signals
760a-760c. Processing logic 704 may process modified sensor output
signals 760a-760c in any manner to generate processed sensor data.
For instance, as shown in FIG. 7, CPU 732 may receive modified
sensor output signals 760a-760c. CPU 732 may process the sensor
information in one or more of modified sensor output signals
760a-760c to generate processed sensor data. For instance, CPU 732
may manipulate the sensor information (e.g., according to an
algorithm of code 738) to convert the sensor information into a
presentable form (e.g., scaling the sensor information, adding or
subtracting a constant to/from the sensor information, etc.).
Furthermore, CPU 732 may transmit the sensor information of
modified sensor output signals 760a-760c to DSP 730 to be digital
signal processed by DSP 730 to generate processed sensor data, and
may receive the processed sensor data from DSP 730. The processed
and/or raw (unprocessed) sensor data may optionally be stored in
memory 734 (e.g., as sensor data 736).
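The scaling and offset conversion mentioned in step 804 can be illustrated with a short helper. The calibration constants below are hypothetical; real values would depend on the particular sensor.

```python
def to_presentable(raw_reading, scale, offset):
    """Convert a raw sensor reading into presentable form by scaling it
    and adding a constant, as described for CPU 732 above. The scale
    and offset are illustrative calibration assumptions."""
    return raw_reading * scale + offset

# e.g., a hypothetical 10-bit temperature sensor: ADC counts -> deg C
temperature_c = to_presentable(512, scale=0.1, offset=-16.2)  # ~35.0
```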
[0094] In step 806, the processed sensor data is wirelessly
transmitted from the hearing assist device to a second device. For
instance, as shown in FIG. 7, CPU 732 may provide the sensor data
(processed or raw) (e.g., from CPU registers, from DSP 730, from
memory 734, etc.) to a transceiver to be transmitted from hearing
assist device 700. In the embodiment of FIG. 7, hearing assist
device 700 includes an NFC transceiver 718 and a BT transceiver
722, which may each be used to transmit sensor data from hearing
assist device 700. In alternative embodiments, hearing assist
device 700 may include one or more additional and/or alternative
transceivers that may transmit sensor data from hearing assist
device 700, including a Wi-Fi transceiver, a forward IR/UV
communication transceiver (e.g., transceiver 520 of FIG. 5), a
telecoil transceiver (which may transmit via telecoil 526), a skin
communication transceiver (which may transmit via skin
communication conductor 534), etc. The operation of such
alternative transceivers will become apparent to persons skilled in
the relevant art(s) based on the teachings provided herein.
[0095] As shown in FIG. 7, NFC transceiver 718 may receive an
information signal from CPU 732 that includes sensor data for
transmitting. In an embodiment, NFC transceiver 718 may modulate
the sensor data onto NFC antenna signal 748 to be transmitted from
hearing assist device 700 by NFC coil 716 when NFC coil 716 is
energized by an RF field generated by a second device.
[0096] Similarly, BT transceiver 722 may receive an information
signal 754 from CPU 732 that includes sensor data for transmitting.
In an embodiment, BT transceiver 722 may modulate the sensor data
onto BT antenna signal 752 to be transmitted from hearing assist
device 700 by antenna 720 (e.g., BTLE antenna 522 of FIG. 5),
according to a Bluetooth.TM. communication protocol or
standard.
[0097] In embodiments, a hearing assist device may transmit/make a
first communication with one or more other devices to provide
sensor data and/or other information, and to receive information.
For instance, FIG. 9 shows a communication system 900 that includes
a hearing assist device communicating with other communication
devices, according to an exemplary embodiment. As shown in FIG. 9,
communication system 900 includes hearing assist device 700, a
mobile computing device 902, a stationary computing device 904, and
a server 906. System 900 is described as follows.
[0098] Mobile computing device 902 (e.g., a local supporting
device) is a device capable of communicating with hearing assist
device 700 according to one or more communication techniques. For
instance, as shown in FIG. 9, mobile computing device 902 includes
a telecoil 910, one or more microphones 912, an IR/UV communication
transceiver 914, a WPT/NFC coil 916, and a Bluetooth.TM. antenna
918. In embodiments, mobile computing device 902 may include one or
more of these features and/or alternative or additional features
(e.g., communication mechanisms, etc.). Mobile computing device 902
may be any type of mobile electronic device, including a personal
digital assistant (PDA), a laptop computer, a notebook computer, a
tablet computer (e.g., an Apple iPad.TM.), a netbook, a mobile
phone (e.g., a cell phone, a smart phone, etc.), a special purpose
medical device, etc. The features of mobile computing device 902
shown in FIG. 9 are described as follows.
[0099] Telecoil 910 is a communication mechanism that may be
present to enable mobile computing device 902 to communicate with
hearing assist device 700 via a telecoil (e.g., telecoil 526 of
FIG. 5). For instance, telecoil 910 and an associated transceiver
may enable mobile computing device 902 to couple audio sources
and/or other communications to hearing assist device 700 in a
manner known to persons skilled in the relevant art(s).
[0100] Microphone(s) 912 may be present to receive voice of a user
of mobile computing device 902. For instance, the user may provide
instructions for mobile computing device 902 and/or for hearing
assist device 700 by speaking into microphone(s) 912. The received
voice may be transmitted to hearing assist device 700 (in digital
or analog form) according to any communication mechanism, or may be
converted into data and/or commands to be provided to hearing
assist device 700 to cause functions/actions in hearing assist
device 700. Microphone(s) 912 may include any number of
microphones, and may be configured in any manner, including being
omni-directional (non-directional), directional, etc.
[0101] IR/UV communication transceiver 914 is a communication
mechanism that may be present to enable communications with hearing
assist device 700 via an IR/UV communication transceiver of hearing
assist device 700 (e.g., forward IR/UV communication transceiver
520 of FIG. 5). IR/UV communication transceiver 914 may receive
information/data from and/or transmit information/data to hearing
assist device 700 (e.g., in the form of modulated light, as
described above).
[0102] WPT/NFC coil 916 is an NFC antenna coupled to an NFC
transceiver in mobile computing device 902 that may be present to
enable NFC communications with an NFC communication mechanism of
hearing assist device 700 (e.g., NFC transceiver 110 of FIG. 1, NFC
coil 530 of FIG. 5). WPT/NFC coil 916 may be used to receive
information/data from and/or transmit information/data to hearing
assist device 700.
[0103] Bluetooth.TM. antenna 918 is a communication mechanism
coupled to a Bluetooth.TM. transceiver in mobile computing device
902 that may be present to enable communications with hearing
assist device 700 (e.g., BT transceiver 722 and antenna 720 of FIG.
7). Bluetooth.TM. antenna 918 may be used to receive
information/data from and/or transmit information/data to hearing
assist device 700.
[0104] As shown in FIG. 9, mobile computing device 902 and hearing
assist device 700 may exchange communication signals 920 according
to any communication mechanism/protocol/standard mentioned herein
or otherwise known. According to step 806, hearing assist device
700 may wirelessly transmit sensor data to mobile computing device
902.
[0105] Stationary computing device 904 (e.g., a local supporting
device) is also a device capable of communicating with hearing
assist device 700 according to one or more communication
techniques. For instance, stationary computing device 904 may be
capable of communicating with hearing assist device 700 according
to any of the communication mechanisms shown for mobile computing
device 902 in FIG. 9, and/or according to other communication
mechanisms/protocols/standards described elsewhere herein or
otherwise known. Stationary computing device 904 may be any type of
stationary electronic device, including a desktop computer (e.g., a
personal computer, etc.), a docking station, a set top box, a
gateway device, an access point, special purpose medical equipment,
etc.
[0106] As shown in FIG. 9, stationary computing device 904 and
hearing assist device 700 may exchange communication signals 922
according to any communication mechanism/protocol/standard
mentioned herein or otherwise known. According to step 806, hearing
assist device 700 may wirelessly transmit sensor data to stationary
computing device 904.
[0107] It is noted that mobile computing device 902 (and/or
stationary computing device 904) may communicate with server 906
(e.g., a remote supporting device, a third device). For instance,
as shown in FIG. 9, mobile computing device 902 (and/or stationary
computing device 904) may be communicatively coupled with server
906 by network 908. Network 908 may be any type of communication
network, including a local area network (LAN), a wide area network
(WAN), a personal area network (PAN), a phone network (e.g., a
cellular network, a land based network), or a combination of
communication networks, such as the Internet. Network 908 may
include wired and/or wireless communication pathway(s) implemented
using any of a wide variety of communication media and associated
protocols. For example, such communication pathway(s) may comprise
wireless communication pathways implemented via radio frequency
(RF) signaling, infrared (IR) signaling, or the like. Such
signaling may be carried out using long-range wireless protocols
such as WIMAX.RTM. (IEEE 802.16) or GSM (Global System for Mobile
Communications), medium-range wireless protocols such as WI-FI.RTM.
(IEEE 802.11), and/or short-range wireless protocols such as
BLUETOOTH.RTM. or any of a variety of IR-based protocols. Such
communication pathway(s) may also comprise wired communication
pathways established over twisted pair, Ethernet cable, coaxial
cable, optical fiber, or the like, using suitable communication
protocols therefor. It is noted that security protocols (e.g.,
private key exchange, etc.) may be used to protect sensitive health
information that is communicated by hearing assist device 700 to
and from remote devices.
[0108] Server 906 may be any computer system, including a
stationary computing device, a server computer, a mobile computing
device, etc. Server 906 may include a web service, an API
(application programming interface), or other service or interface
for communications.
[0109] Sensor data and/or other information may be transmitted
(e.g., relayed) to server 906 over network 908 to be processed.
After such processing, in response, server 906 may transmit
processed data, instructions, and/or other information through
network 908 to mobile computing device 902 (and/or stationary
computing device 904) to be transmitted to hearing assist device
700 to be stored, to cause a function/action at hearing assist
device 700, and/or for other reasons.
[0110] Referring back to FIG. 8, in step 808, at least one command
is received from the second device at the hearing assist device.
For instance, referring to FIG. 7, hearing assist device 700 may
receive a second communication as a wirelessly transmitted
communication signal from a second device at NFC coil 716, antenna
720, or other antenna or communication mechanism at hearing assist
device 700. The communication may include a command and/or may
identify a function, and hearing assist device 700 may respond by
performing the command and/or function. For instance, hearing
assist device 700 may respond by gathering additional sensor data,
by analyzing retrieved sensor data, by performing a command, etc.
Example commands include commands relating to sensor data capture,
such as a command for a particular sensor to perform and/or provide
a measurement, a command related to a sensing configuration (e.g.,
turning on and/or off particular sensors, calibrating particular
sensors, etc.), a command related to a hearing assist device
configuration (e.g., turning on and/or off particular hearing
assist device components, calibrating particular components, etc.),
a command that defines audio playback, etc. A received
communication may define audio playback, such as by including or
causing audio data to be played to the user by a speaker of hearing
assist device 700 as voice or other sound, including or causing
audio data to be played to the user by a speaker of hearing assist
device 700 that prompts for user input (e.g., requests a user
response to a question, etc.), etc.
[0111] For instance, in the example of NFC coil 716, a command may
be transmitted from NFC coil 716 on NFC antenna signal 748 to NFC
transceiver 718. NFC transceiver 718 may demodulate command data
from the received communication signal, and provide the command to
CPU 732. In the example of antenna 720, the command may be
transmitted from antenna 720 on BT antenna signal 752 to BT
transceiver 722. BT transceiver 722 may demodulate command data
from the received communication signal, and provide the command to
CPU 732.
[0112] CPU 732 may execute the received command. The received
command may cause hearing assist device 700 to perform one or more
functions/actions. For instance, in embodiments, the command may
cause hearing assist device 700 to turn on or off, to change modes,
to activate or deactivate one or more sensors, to wirelessly
transmit further information, to execute particular program code
(e.g., stored as code 738 in memory 734), to play a sound (e.g., an
alert, a tone, a beeping noise, pre-recorded or synthesized voice,
etc.) from speaker 714 to the user to inform the user of
information and/or cause the user to perform a function/action,
and/or cause one or more additional and/or alternative
functions/actions to be performed by hearing assist device 700.
Further examples of such commands and functions/actions are
described elsewhere herein.
[0113] In embodiments, a hearing assist device may be configured to
convert received RF energy into charge for storage in a battery of
the hearing assist device. For instance, as shown in FIG. 7,
hearing assist device 700 includes charge circuit 724 for charging
battery 726, which is a rechargeable battery (e.g., rechargeable
battery 114). In an embodiment, charge circuit 724 may operate
according to FIG. 10. FIG. 10 shows a flowchart 1000 of a process
for wirelessly charging a battery of a hearing assist device,
according to an exemplary embodiment. Flowchart 1000 is described
as follows.
[0114] In step 1002 of flowchart 1000, a radio frequency signal is
received. For example, as shown in FIG. 7, NFC coil 716, antenna
720, and/or other antenna or coil of hearing assist device 700 may
receive a radio frequency (RF) signal. The RF signal may be a
communication signal that includes data (e.g., modulated on the RF
signal), or may be an un-modulated RF signal. Charge circuit 724
may be coupled to one or more of NFC coil 716, antenna 720, or
other antenna to receive the RF signal.
[0115] In step 1004, a charge current is generated that charges a
rechargeable battery of the hearing assist device based on the
received radio frequency signal. In an embodiment, charge circuit
724 is configured to generate a charge current 756 that is used to
charge battery 726. Charge circuit 724 may be configured in various
ways to convert a received RF signal to a charge current. For
instance, charge circuit 724 may include an induction coil to take
power from an electromagnetic field and convert it to electrical
current. Alternatively, charge circuit 724 may include a diode
rectifier circuit that rectifies the received RF signal to a DC
(direct current) signal, and may include one or more charge pump
circuits coupled to the diode rectifier circuit used to create a
higher voltage value from the DC signal. Alternatively, charge
circuit 724 may be configured in other ways to generate charge
current 756 from a received RF signal.
[0116] In this manner, hearing assist device 700 may maintain power
for operation, with battery 726 being charged periodically by RF
fields generated by other devices, rather than needing to
physically replace batteries.
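As a rough power-balance sketch of what a circuit like charge circuit 724 could deliver: the charge current is the usable harvested power divided by the battery voltage. The efficiency and power figures below are assumptions for illustration; actual harvested power depends on the coil, field strength, and rectifier design.

```python
def charge_current_ma(received_rf_mw, conversion_efficiency, battery_v):
    """Estimate the charge current available to a battery such as
    battery 726 from harvested RF power: usable power divided by
    battery voltage (mW / V = mA). Efficiency is an assumed figure."""
    usable_mw = received_rf_mw * conversion_efficiency
    return usable_mw / battery_v

# e.g., 10 mW of received RF at an assumed 40% conversion efficiency
# into a 4 V cell yields roughly 1 mA of charge current.
current_ma = charge_current_ma(10.0, 0.40, 4.0)
```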
[0117] In another embodiment, hearing assist device 700 may be
configured to generate sound based on received sensor data. For
instance, hearing assist device 700 may operate according to FIG.
11. FIG. 11 shows a flowchart 1100 of a process for generating and
broadcasting sound based on sensor data, according to an exemplary
embodiment. For purposes of illustration, flowchart 1100 is
described as follows with reference to FIG. 7.
[0118] Flowchart 1100 begins with step 1102. In step 1102, an audio
signal is generated based at least on the processed sensor data.
For instance, as described above with respect to steps 802 and 804
of flowchart 800 (FIG. 8), a sensor output signal may be processed
to generate processed sensor data. The processed sensor data may be
stored in memory 734 as sensor data 736, may be held in registers
in CPU 732, or may be present in another location. Audio data for
one or more sounds (e.g., tones, beeping sounds, voice segments,
etc.) may be stored in memory 734 (e.g., as other data 768) that
may be selected for play to the user based on particular sensor
data (e.g., particular values of sensor data, etc.). CPU 732 or DSP
730 may select the audio data corresponding to particular sensor
data from memory 734. Alternatively, CPU 732 may transmit a request
for the audio data from another device using a communication
mechanism (e.g., NFC transceiver 718, BT transceiver 722, etc.).
DSP 730 may receive the audio data from CPU 732, from memory 734,
or from another device, and may generate processed digital audio
signal 762 based thereon.
[0119] In step 1104, sound is generated based on the audio signal,
the sound broadcast from a speaker of the hearing assist device
into the ear of the user. For instance, as shown in FIG. 7, D/A
converter 764 may be present, and may receive processed digital
audio signal 762. D/A converter 764 may convert processed digital
audio signal 762 to analog form to generate processed audio signal
766. Speaker 714 receives processed audio signal 766, and
broadcasts sound generated based on processed audio signal 766 into
the ear of the user.
[0120] In this manner, sounds may be provided to the user by
hearing assist device 700 based at least on sensor data, and
optionally further based on additional information. The sounds may
provide information to the user, and may remind or instruct the
user to perform a function/action. The sounds may include one or
more of a tone, a beeping sound, or a voice that includes at least
one of a verbal instruction to the user, a verbal warning to the
user, or a verbal question to the user. For instance, a tone or a
beeping sound may be provided to the user as an alert based on
particular values of sensor data (e.g., indicating a high
glucose/blood sugar value), and/or a voice instruction may be
provided to the user as the alert based on the particular values of
sensor data (e.g., a voice segment stating "Blood sugar is
low--Insulin is required" or "hey, your heart rate is 80 beats per
minute, your heart is fine, your pacemaker has got 6 hours of
battery left.").
[0121] In another embodiment, hearing assist device 700 may be
configured to generate filtered environmental sound. For instance,
hearing assist device 700 may operate according to FIG. 12. FIG. 12
shows a flowchart 1200 of a process for generating and broadcasting
filtered sound from a hearing assist device, according to an
exemplary embodiment. For purposes of illustration, flowchart 1200
is described as follows with reference to FIG. 7.
[0122] Flowchart 1200 begins with step 1202. In step 1202, an audio
signal is generated based on environmental sound received by at
least one microphone of the hearing assist device. For instance, as
shown in FIG. 7, microphone 706 may generate a received audio
signal 740 based on received environmental sound. Received audio
signal 740 may optionally be amplified, filtered, and converted to
digital form to generate digital audio signal 746, as shown in FIG.
7.
[0123] In step 1204, one or more frequencies of the audio signal
are selectively favored to generate a modified audio signal. As
shown in FIG. 7, DSP 730 may receive digital audio signal 746, and
may perform digital signal processing on digital audio signal 746
to generate processed digital audio signal 762. DSP 730 may favor
one or more frequencies by amplifying particular frequencies,
attenuating particular frequencies, and/or by otherwise filtering
digital audio signal 746 in the discrete domain. DSP 730 may
perform the signal processing for various reasons, including noise
cancelation or hearing loss compensation. For instance, DSP 730 may
process digital audio signal 746 to compensate for a personal
hearing frequency response of the user, such as compensating for
poor hearing of high frequencies, middle range frequencies, or
other personal frequency response characteristics of the user.
[0124] In step 1206, sound is generated based on the modified audio
signal, the sound broadcast from a speaker of the hearing assist
device into the ear of the user. For instance, as shown in FIG. 7,
D/A converter 764 may be present, and may receive processed digital
audio signal 762. D/A converter 764 may convert processed digital
audio signal 762 to analog form to generate processed audio signal
766. Speaker 714 receives processed audio signal 766, and
broadcasts sound generated based on processed audio signal 766 into
the ear of the user.
[0125] In this manner, environmental noise, voice, and other sounds
may be tailored to a particular user's personal hearing frequency
response characteristics. Furthermore, particular noises in the
environment (e.g., road noise, engine noise, etc.) may be attenuated
or filtered from the received environmental sounds so that the user
may better hear important or desired sounds. Furthermore,
sounds that are desired to be heard (e.g., music, a conversation, a
verbal warning, verbal instructions, sirens, sounds of a nearby car
accident, etc.) may be amplified so that the user may better hear
them.
[0126] In another embodiment, hearing assist device 700 may be
configured to transmit recorded voice of a user to another device.
For instance, hearing assist device 700 may operate according to
FIG. 13. FIG. 13 shows a flowchart 1300 of a process for generating
an information signal in a hearing assist device based on a voice
of a user, and for transmitting the information signal to a second
device, according to an exemplary embodiment. For purposes of
illustration, flowchart 1300 is described as follows with reference
to FIG. 7.
[0127] Flowchart 1300 begins with step 1302. In step 1302, an audio
signal is generated based on a voice of the user received at a
microphone of the hearing assist device. For instance, as shown in
FIG. 7, microphone 706 may generate a received audio signal 740
based on received voice of the user. Received audio signal 740 may
optionally be amplified, filtered, and converted to digital form to
generate digital audio signal 746, as shown in FIG. 7.
[0128] The voice of the user may be any statement made by the user,
including a question, a statement of fact, a command, or any other
verbal sequence. For instance, the user may ask "what is my heart
rate". Such statements may be intended for capture by one or more
hearing assist devices and supporting local and remote systems. Such
statements may also include unintentional sounds such as semi-lucid
ramblings, moaning, choking, coughing, and/or other sounds. Any one
or more of the hearing assist devices and the supporting local
device may receive such audio (via microphones), and the audio may
be forwarded from the hearing assist device(s) as needed for further
processing. This processing may include voice
and/or sound recognition, comparisons with command words or
sequences, (video, audio) prompting for (gesture, tactile or
audible) confirmation, carrying out commands, storage for later
analysis or playback, and/or forwarding to an appropriate recipient
system for further processing, storage, and/or presentations to
others.
[0129] In step 1304, an information signal is generated based on
the audio signal. As shown in FIG. 7, DSP 730 may receive digital
audio signal 746. In an embodiment, DSP 730 and/or CPU 732 may
generate an information signal from digital audio signal 746 to be
transmitted to a second device from hearing assist device 700. DSP
730 and/or CPU 732 may optionally perform voice/speech recognition
on digital audio signal 746 to recognize spoken words included
therein, and may include the spoken words in the generated
information signal.
[0130] For instance, in an embodiment, code 738 stored in memory
734 may include a voice recognition program that may be executed by
CPU 732 and/or DSP 730. The voice recognition program may use
conventional or proprietary voice recognition techniques.
Furthermore, such voice recognition techniques may be augmented by
sensor data. For instance, as described above, position/motion
sensor 518 may include a vibration sensor. The vibration sensor may
detect vibrations of the user associated with speaking (e.g., jaw
movement of the wearer during talking), and generate corresponding
vibration information/data. The vibration information output by the
vibration sensor may be received by CPU 732 and/or DSP 730, and may
be used to aid in improving speech/voice recognition performed by
the voice recognition program. For instance, the vibration
information may be used by the voice recognition program to detect
breaks between words, to identify the location of spoken syllables,
to identify the syllables themselves, and/or to better perform
other aspects of voice recognition. Alternatively, the vibration
information may be transmitted from hearing assist device 700,
along with the information signal, to a second device to perform
the voice recognition process at the second device (or other
device).
[0131] In step 1306, the generated information signal is
transmitted to the second device. For instance, as shown in FIG. 7,
CPU 732 may provide the information signal (e.g., from CPU
registers, from DSP 730, from memory 734, etc.) to a transceiver to
be transmitted from hearing assist device 700 (e.g., NFC
transceiver 718, BT transceiver 722, or other transceiver).
[0132] Another device, such as mobile computing device 902,
stationary computing device 904, and server 906, which may be
associated devices, third party devices (utilized by third
parties), or be otherwise related to or not related to hearing
assist device 700, may receive the transmitted voice information,
and may analyze the voice (spoken words, moans, slurred words,
etc.) therein to determine one or more functions/actions to be
performed. As a result, one or more functions/actions may be
determined to be performed by hearing assist device 700 or another
device.
[0133] In another embodiment, hearing assist device 700 may be
configured to enable voice to be received and/or generated to be
played to the user. For instance, hearing assist device 700 may
operate according to FIG. 14. FIG. 14 shows a flowchart 1400 of a
process for generating voice to be broadcast to a user, according
to an exemplary embodiment. For purposes of illustration, flowchart
1400 is described as follows with reference to FIG. 7.
[0134] Flowchart 1400 begins with step 1402. In step 1402, a sensor
output signal is received from a medical sensor of the hearing
assist device that senses a characteristic of the user. Similarly
to step 802 of FIG. 8, sensors 702a-702c each sense/measure
information about a health characteristic of the user of hearing
assist device 700. For instance, sensor 702a may sense a
characteristic of the user (e.g., a heart rate, a blood pressure, a
glucose level, a temperature, etc.). Sensor 702a generates sensor
output signal 758a, which indicates the measured information about
the corresponding health characteristic. Sensor interface 728a,
when present, may convert sensor output signal 758a to modified
sensor output signal 760a, to be received by processing logic.
[0135] In step 1404, processed sensor data is generated based on
the sensor output signal. Similarly to step 804 of FIG. 8,
processing logic 704 receives modified sensor output signal 760a,
and may process modified sensor output signal 760a in any manner.
For instance, as shown in FIG. 7, CPU 732 may receive modified
sensor output signal 760a, and may process the sensor information
contained therein to generate processed sensor data. For instance,
CPU 732 may manipulate the sensor information (e.g., according to
an algorithm of code 738) to convert the sensor information into a
presentable form (e.g., scaling the sensor information, adding or
subtracting a constant to/from the sensor information, etc.), or
may otherwise process the sensor information. Furthermore, CPU 732
may transmit the sensor information of modified sensor output
signal 760a to DSP 730 to be digital signal processed.
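For purposes of illustration only, the manipulation described in step 1404 (scaling the sensor information and adding or subtracting a constant to convert it into a presentable form) may be sketched as follows; the function name and conversion constants are hypothetical and not taken from the application:

```python
def to_presentable(raw_value, scale, offset):
    """Convert a raw sensor reading into a presentable value by
    scaling it and adding a constant (e.g., ADC counts to units)."""
    return raw_value * scale + offset

# Hypothetical example: a temperature sensor whose raw ADC output
# maps linearly to degrees Fahrenheit.
raw_temp = 512
temp_f = to_presentable(raw_temp, scale=0.125, offset=32.0)
```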
[0136] In step 1406, a voice audio signal generated based at least
on the processed sensor data is received. In an embodiment, the
processed sensor data generated in step 1404 may be transmitted
from hearing assist device 700 to another device (e.g., as shown in
FIG. 9), and a voice audio signal may be generated at the other
device based on the processed sensor data. In another embodiment,
the voice audio signal may be generated by processing logic 704
based on the processed sensor data. The voice audio signal contains
voice information (e.g., spoken words) that relate to the processed
sensor data. For instance, the voice information may include a
verbal alert, verbal instructions, and/or other verbal information
to be provided to the user based on the processed sensor data
(e.g., based on a value of measured sensor data, etc.). The voice
information may be generated by being synthesized, being retrieved
from memory 734 (e.g., a library of recorded spoken segments in other
data 768), or being generated from a combination thereof. It is
noted that the voice audio signal may be generated based on
processed sensor data from one or more sensors. DSP 730 may output
the voice audio signal as processed digital audio signal 762.
[0137] In step 1408, voice is broadcast from the speaker into the
ear of the user based on the received voice audio signal. For
instance, as shown in FIG. 7, D/A converter 764 may be present, and
may receive processed digital audio signal 762. D/A converter 764
may convert processed digital audio signal 762 to analog form to
generate processed audio signal 766. Speaker 714 receives processed
audio signal 766, and broadcasts voice generated based on processed
audio signal 766 into the ear of the user.
[0138] In this manner, voice may be provided to the user by hearing
assist device 700 based at least on sensor data, and optionally
further based on additional information. The voice may provide
information to the user, and may remind or instruct the user to
perform a function/action. For instance, the voice may include at
least one of a verbal instruction to the user ("take an iron
supplement"), a verbal warning to the user ("your heart rate is
high"), a verbal question to the user ("have you fallen down, and
do you need assistance?"), or a verbal answer to the user ("your
heart rate is 98 beats per minute").
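One way the selection of verbal information based on processed sensor data (per steps 1406–1408 and the verbal examples above) might be sketched is shown below; the threshold, message strings, and library structure are hypothetical illustrations, not part of the application:

```python
# Hypothetical library of pre-recorded spoken segments keyed by type.
SPOKEN_SEGMENTS = {
    "warning": "your heart rate is high",
    "answer": "your heart rate is {bpm} beats per minute",
}

def select_voice_message(heart_rate_bpm, high_threshold=100):
    """Choose verbal information from processed sensor data: a verbal
    warning when the reading exceeds a threshold, otherwise a verbal
    answer reporting the measured value."""
    if heart_rate_bpm > high_threshold:
        return SPOKEN_SEGMENTS["warning"]
    return SPOKEN_SEGMENTS["answer"].format(bpm=heart_rate_bpm)
```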
[0139] The next section describes some example
hardware/software/firmware embodiments for hearing assist devices
and associated remote devices.
V. Hearing Assist Device and Remote Device Embodiments
[0140] In embodiments, hearing assist devices may be configured to
perform various functions using hardware (e.g., circuits), or a
combination of hardware and software/firmware (e.g., code 738 of
FIG. 7, etc.). Furthermore, hearing assist devices may communicate
with remote devices (e.g., mobile computing device 902, stationary
computing device 904, server 906, etc.) that include corresponding
functionality. According to an embodiment, FIG. 15 shows a system
1500 comprising a hearing assist device 1501 and a
cloud/service/phone/portable device 1503 that may be
communicatively connected thereto. Hearing assist device 1501 may
comprise, for example and without limitation, one of hearing assist
devices 102, 500, 600, or 700 described above. Although only a
single hearing assist device 1501 is shown in FIG. 15, it is to be
understood that system 1500 may include two hearing assist devices.
Device 1503 may comprise, for example and without limitation,
mobile computing device 902, stationary computing device 904,
server 906, or another remote device that is accessible to hearing
assist device 1501. Thus device 1503 may be local with respect to
the wearer of hearing assist device 1501 or remote with respect to
the wearer of hearing assist device 1501.
[0141] Hearing assist device 1501 includes a number of processing
modules that may be implemented as software or firmware running on
one or more general purpose processors (e.g., CPU 732 of FIG. 7)
and/or DSPs (e.g., DSP 730), as dedicated circuitry, or as a
combination thereof. Such processors and/or dedicated circuitry are
collectively referred to in FIG. 15 as general purpose (DSP) and
dedicated processing circuitry 1513. As shown in FIG. 15, the
processing modules include a speech generation module 1523, a
speech/noise recognition module 1525, an enhanced audio processing
module 1527, a clock/scheduler module 1529, a mode select and
reconfiguration module 1531, and a battery management module
1533.
[0142] As also shown in FIG. 15, hearing assist device 1501 further
includes local storage 1535. Local storage 1535 comprises one or
more volatile and/or non-volatile memory devices or structures that
are internal to hearing assist device 1501 (e.g., memory 734 of
FIG. 7). Such memory devices or structures may be used to store
recorded audio information in an audio playback queue 1537 as well
as to store information and settings 1539 associated with hearing
assist device 1501, a user thereof, a device paired thereto, and
services (cloud-based or otherwise) accessed by or on behalf of
hearing assist device 1501.
[0143] Hearing assist device 1501 further includes sensor
components and associated circuitry 1541. Such sensor components
and associated circuitry may include but are not limited to one or
more microphones, bone conduction sensors, temperature sensors,
blood pressure sensors, blood glucose sensors, pulse oximetry
sensors, pH sensors, vibration sensors, accelerometers, gyros,
magnetos, any other sensor mentioned elsewhere herein, or the
like.
[0144] Hearing assist device 1501 still further includes user
interface (UI) components and associated circuitry 1543. Such UI
components may include buttons, switches, dials, capacitive touch
sensing devices, or other mechanical components by which a user may
control and configure the operation of hearing assist device 1501
(e.g., switch 532 and volume controller 540). Such UI components
may also comprise capacitive sensing components to allow for
touch-based or tap-based interaction with hearing assist device
1501. Such UI components may further include a voice-based UI. Such
voice-based UI may utilize speech/noise recognition module 1525 to
recognize commands uttered by a user of hearing assist device 1501
and/or speech generation module 1523 to provide output in the form
of pre-defined or synthesized speech. In an embodiment in which
hearing assist device 1501 comprises an integrated part of a pair of
glasses, visor or helmet, user interface components and associated
circuitry 1543 may also comprise a display integrated with or
projected upon a portion of the glasses, visor or helmet for
presenting information to a user.
[0145] Hearing assist device 1501 also includes communication
interfaces and associated circuitry 1545 for carrying out
communication over one or more wired, wireless, or skin-based
communication pathways. Communication interfaces and associated
circuitry 1545 enable hearing assist device 1501 to communicate
with device 1503. Communication interfaces and associated circuitry
1545 may also enable hearing assist device 1501 to communicate with
a second hearing assist device worn by the same user as well as
with other devices.
[0146] Generally speaking, cloud/service/phone/portable device 1503
comprises power resources, processing resources, and storage
resources that can be used by hearing assist device 1501 to assist
in performing certain operations and/or to improve the performance
of such operations when a communication pathway has been
established between the two devices.
[0147] In particular, device 1503 includes a number of assist
processing modules that may be implemented as software or firmware
running on one or more general purpose processors and/or DSPs, as
dedicated circuitry, or as a combination thereof. Such processors
and/or dedicated circuitry are collectively referred to in FIG. 15
as general/dedicated processing circuitry (with hearing assist
device support) 1553. As shown in FIG. 15, the processing modules
include a speech generation assist module 1555, a speech/noise
recognition assist module 1557, an enhanced audio processing assist
module 1559, a clock/scheduler assist module 1561, a mode select
and reconfiguration assist module 1563, and a battery management
assist module 1565.
[0148] As also shown in FIG. 15, device 1503 further includes
storage 1567. Storage 1567 comprises one or more volatile and/or
non-volatile memory devices/structures and/or storage systems that
are internal to or otherwise accessible to device 1503. Such memory
devices/structures and/or storage systems may be used to store
recorded audio information in an audio playback queue 1569 as well
as to store information and settings 1571 associated with hearing
assist device 1501, a user thereof, a device paired thereto, and
services (cloud-based or otherwise) accessed by or on behalf of
hearing assist device 1501. For instance, storage 1567 may be used
to record commands to be cached in device 1503, such that when a
time window becomes available for device 1503 to communicate with
the outside environment (because of power savings or availability),
such stored commands (and/or other data) may be sent to the user's
mobile device, other devices, the cloud, etc. for processing.
Results of such processing may be transmitted back to device 1503,
to an email address of the user, a text message address of the
user, and/or may be provided to the user in another manner.
[0149] Device 1503 also includes communication interfaces and
associated circuitry 1577 for carrying out communication over one
or more wired, wireless or skin-based communication pathways.
Communication interfaces and associated circuitry 1577 enable
device 1503 to communicate with hearing assist device 1501. Such
communication may be direct (point-to-point between device 1503 and
hearing assist device 1501) or indirect (through one or more
intervening devices or nodes). Communication interfaces and
associated circuitry 1577 may also enable device 1503 to
communicate with other devices or access various remote services,
including cloud-based services.
[0150] In an embodiment in which device 1503 comprises a device
that is carried by or is otherwise locally accessible to a wearer
of hearing assist device 1501, device 1503 may also comprise
supplemental sensor components and associated circuitry 1573 and
supplemental user interface components and associated circuitry
1575 that can be used by hearing assist device 1501 to assist in
performing certain operations and/or to improve the performance of
such operations.
[0151] Further explanation and examples of how external operational
support may be provided to a hearing assist device will now be
provided with continued reference to system 1500 of FIG. 15.
[0152] A prerequisite for providing external operational support to
hearing assist device 1501 by device 1503 may be the establishment
of a communication pathway between device 1503 and hearing assist
device 1501. In one embodiment, the establishment of such a
communication pathway is achieved by implementing a communication
service on hearing assist device 1501 that monitors for the
presence of device 1503 and selectively establishes communication
therewith in accordance with a predefined protocol. Alternatively,
a communication service may be implemented on device 1503 that
monitors for the presence of hearing assist device 1501 and
selectively establishes communication therewith in accordance with
a predefined protocol. Still other methods of establishing a
communication pathway between hearing assist device 1501 and device
1503 may be used.
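The communication service described above (monitoring for the presence of the peer device and selectively establishing communication therewith in accordance with a predefined protocol) may be sketched as follows; the polling structure and callback names are hypothetical, offered only as one possible realization:

```python
import time

def monitor_and_connect(scan_for_device, establish_link,
                        poll_interval_s=5.0, max_polls=None):
    """Communication service sketch: periodically monitor for the
    presence of the peer device, and selectively establish a
    communication pathway with it once it is detected."""
    polls = 0
    while max_polls is None or polls < max_polls:
        peer = scan_for_device()
        if peer is not None:
            return establish_link(peer)
        polls += 1
        time.sleep(poll_interval_s)
    return None  # peer never appeared within the polling budget
```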
[0153] Hearing assist device 1501 includes battery management
module 1533 that monitors a state of a battery internal to hearing
assist device 1501. Battery management module 1533 may also be
configured to alert a wearer of hearing assist device 1501 when
such battery is in a low-power state so that the wearer can
recharge the battery. As discussed above, the wearer of hearing
assist device 1501 can cause such recharging to occur by bringing a
portable electronic device within a certain distance of hearing
assist device 1501 such that power may be transferred via an NFC
link, WPT link, or other suitable link for transferring power
between such devices. In an embodiment in which device 1503
comprises such a portable electronic device, hearing assist device
1501 may be said to be utilizing the power resources of device 1503
to assist in the performance of its operations.
[0154] As also noted above, when a communication pathway has been
established between hearing assist device 1501 and device 1503,
hearing assist device 1501 can also utilize other resources of
device 1503 to assist in performing certain operations and/or to
improve the performance of such operations. Whether and when
hearing assist device 1501 so utilizes the resources of device 1503
may vary depending upon the designs of such devices and/or any user
configuration of such devices.
[0155] For example, hearing assist device 1501 may be programmed to
only utilize certain resources of device 1503 when the battery
power available to hearing assist device 1501 has dropped below a
certain level. As another example, hearing assist device 1501 may
be programmed to only utilize certain resources of device 1503 when
it is determined that an estimated amount of power that will be
consumed in maintaining a particular communication pathway between
hearing assist device 1501 and device 1503 will be less than an
estimated amount of power that will be saved by offloading
functionality to and/or utilizing the resources of device 1503. In
accordance with such an embodiment, an assistance feature of device
1503 may be provided when a very low power communication pathway
can be established or exists between hearing assist device 1501 and
device 1503, but that same assistance feature of device 1503 may be
disabled if the only communication pathway that can be established
or exists between hearing assist device 1501 and device 1503 is one
that consumes a relatively greater amount of power.
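The offloading conditions described in this paragraph (utilize remote resources only when battery is low, and only when maintaining the communication pathway is estimated to cost less power than offloading saves) may be sketched as a simple decision function; all names and thresholds are hypothetical:

```python
def should_offload(battery_level, low_battery_threshold,
                   pathway_power_estimate, power_saved_estimate):
    """Decide whether to offload a function to the remote device.
    Offload only when the local battery is below a threshold AND the
    estimated power cost of maintaining the communication pathway is
    less than the estimated power saved by offloading."""
    if battery_level >= low_battery_threshold:
        return False
    return pathway_power_estimate < power_saved_estimate
```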
[0156] Still other decision algorithms can be used to determine
whether and when hearing assist device 1501 will utilize resources
of device 1503. Such algorithms may be applied by battery
management module 1533 of hearing assist device 1501 and/or by
battery management assist module 1565 of device 1503 prior to
activating assistance features of device 1503. Furthermore, a user
interface provided by hearing assist device 1501 and/or device 1503
may enable a user to select which features of hearing assist device
1501 should be able to utilize external operational support and/or
under what conditions such external operational support should be
provided. The settings established by the user may be stored as
part of information and settings 1539 in local storage 1535 of
hearing assist device 1501 and/or as part of information and
settings 1571 in storage 1567 of device 1503.
[0157] In accordance with certain embodiments, hearing assist
device 1501 can also utilize resources of a second hearing assist
device to perform certain operations. For example, hearing assist
device 1501 may communicate with a second hearing assist device
worn by the same user to coordinate distribution or shared
execution of particular operations. Such communication may be
carried out, for example, via a point-to-point link between the two
hearing assist devices or via links between the two hearing assist
devices and an intermediate device, such as a portable electronic
device being carried by a user. The determination of whether a
particular operation should be performed by hearing assist device
1501 versus the second hearing assist device may be made by battery
management module 1533, a battery management module of the second
hearing assist device, or via coordination between both battery
management modules.
[0158] For example, if hearing assist device 1501 has more battery
power available than the second hearing assist device, hearing
assist device 1501 may be selected to perform a particular
operation, such as taking a blood pressure reading or the like.
Such battery imbalance may result from, for example, one hearing
assist device being used at a higher volume than the other over an
extended period of time. Via coordination between the two hearing
assist devices, a more balanced discharging of the batteries of
both devices can be achieved. Furthermore, in accordance with
certain embodiments, certain sensors may be present on hearing
assist device 1501 that are not present on the second hearing
assist device and certain sensors may be present on the second
hearing assist device that are not present on hearing assist device
1501, such that a distribution of functionality between the two
hearing assist devices is achieved by design.
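A coordination rule combining the two considerations above (sensor availability by design, and balanced battery discharge across the pair) might be sketched as follows; the dictionary layout and device names are hypothetical illustrations:

```python
def select_device_for_operation(operation, left, right):
    """Choose which of two paired hearing assist devices performs an
    operation: a device lacking the required sensor is never chosen;
    when both qualify, prefer the device with more battery remaining,
    which tends to balance discharge across the pair."""
    left_ok = operation in left["sensors"]
    right_ok = operation in right["sensors"]
    if left_ok and not right_ok:
        return left["name"]
    if right_ok and not left_ok:
        return right["name"]
    return left["name"] if left["battery"] >= right["battery"] else right["name"]
```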
[0159] Hearing assist device 1501 comprises a speech generation
module 1523 that enables hearing assist device 1501 to generate and
output verbal audio information (spoken words or the like) to a
wearer thereof via a speaker of hearing assist device 1501. Such
verbal audio information may be used to implement a voice UI, to
provide speech-based alerts, messages and reminders as part of a
clock/scheduler feature implemented by clock/scheduler module 1529,
or to provide emergency alerts or messages to a wearer of hearing
assist device based on a detected medical condition of the wearer,
or the like. The speech generated by speech generation module 1523
may be pre-recorded and/or dynamically synthesized, depending upon
the implementation.
[0160] When a communication pathway has been established between
hearing assist device 1501 and device 1503, speech generation
assist module 1555 of device 1503 may operate to perform all or
part of the speech generation function that would otherwise be
performed by speech generation module 1523 of hearing assist device
1501. Such operation by device 1503 can advantageously cause the
battery power of hearing assist device 1501 to be conserved. Any
speech generated by speech generation assist module 1555 may be
communicated back to hearing assist device 1501 for playback via at
least one speaker of hearing assist device 1501. Any of a wide
variety of well-known speech codecs may be used to carry out such
transmission of speech information in an efficient manner.
Additionally or alternatively, any speech generated by speech
generation assist module 1555 can be played back via one or more
speakers of device 1503 if device 1503 is local with respect to the
wearer of hearing assist device 1501.
[0161] Furthermore, speech generation assist module 1555 may
provide a more elaborate set of features than those provided by
speech generation module 1523, as device 1503 may have access to
greater power, processing and storage resources than hearing assist
device 1501 to support such additional features. For example,
speech generation assist module 1555 may provide a more extensive
vocabulary of pre-recorded words, terms and sentences or may
provide a more powerful speech synthesis engine.
[0162] Hearing assist device 1501 includes a speech/noise
recognition module 1525 that is operable to apply speech and/or
noise recognition algorithms to audio input received via one or
more microphones of hearing assist device 1501. Such algorithms can
enable speech/noise recognition module 1525 to determine when a
wearer of hearing assist device 1501 is speaking and further to
recognize words that are spoken by such wearer, while rejecting
non-speech utterances and noise. Such algorithms may be used, for
example, to enable hearing assist device 1501 to provide a
voice-based UI by which a wearer of hearing assist device 1501 can
exercise voice-based control over the device.
[0163] When a communication pathway has been established between
hearing assist device 1501 and device 1503, speech/noise
recognition assist module 1557 of device 1503 may operate to
perform all or part of the speech/noise recognition functions that
would otherwise be performed by speech/noise recognition module
1525 of hearing assist device 1501. Such operation by device 1503
can advantageously cause the battery power of hearing assist device
1501 to be conserved.
[0164] Furthermore, speech/noise recognition assist module 1557 may
provide a more elaborate set of features than those provided by
speech/noise recognition module 1525, as device 1503 may have
access to greater power, processing and storage resources than
hearing assist device 1501 to support such additional features. For
example, speech/noise recognition assist module 1557 may include a
training program that a wearer of hearing assist device 1501 can
use to train the speech recognition logic to better recognize and
interpret his/her own voice. As another example, speech/noise
recognition assist module 1557 may include a process by which a
wearer of hearing assist device 1501 can add new words to the
dictionary of words that are recognized by the speech recognition
logic. Such additional features may be included in an application
that can be installed by the wearer on device 1503. Such additional
features may also be supported by a user interface that forms part
of supplemental user interface components and associated circuitry
1575. Of course, such features may be included in speech/noise
recognition module 1525 in accordance with certain embodiments.
[0165] Hearing assist device 1501 includes an enhanced audio
processing module 1527. Enhanced audio processing module 1527 may
be configured to process an input audio signal received by hearing
assist device 1501 to achieve a desired frequency response prior to
playing back such input audio signal to a wearer of hearing assist
device 1501. For example, enhanced audio processing module 1527 may
selectively amplify certain frequency components of an input audio
signal prior to playing back such input audio signal to the wearer.
The frequency response to be achieved may be specified by or derived
from a prescription for the wearer that is provided to hearing
assist device 1501 by an external device or system. In certain
embodiments, such prescription may be formatted in a standardized
manner in order to facilitate use thereof by any of a variety of
hearing assistance devices and audio reproduction systems.
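The selective amplification described above (applying prescription-derived gains to certain frequency components of the input audio signal) may be sketched in simplified per-band form; real implementations would operate on filtered signal bands rather than scalar levels, and all values here are hypothetical:

```python
def apply_prescription(band_levels_db, prescription_gains_db):
    """Apply per-band gains from a prescription to the frequency
    bands of an input signal, selectively amplifying certain
    frequency components (levels and gains in dB)."""
    return [level + gain
            for level, gain in zip(band_levels_db, prescription_gains_db)]
```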
[0166] In accordance with a further embodiment in which hearing
assist device 1501 is worn in conjunction with a second hearing
assist device, enhanced audio processing module 1527 may modify a
first input audio signal received by hearing assist device 1501
prior to playback of the first input audio signal to one ear of the
wearer, while an enhanced audio processing module of the second
hearing assist device modifies a second input audio signal received
by the second hearing assist device prior to playback of the second
input audio signal to the other ear of the wearer. Such
modification of the first and second input audio signals can be
used to achieve enhanced spatial signaling for the wearer. That is
to say, the enhanced audio signals provided to both ears of the
wearer will enable the wearer to better determine the spatial
origin of sounds. Such enhancement is desirable for persons who
have a poor ability to detect the spatial origin of sound, and
therefore a poor ability to respond to spatial cues. To determine
the appropriate modifications for the left and right ear of the
wearer, an appropriate user-specific "head transfer function" can
be determined through testing of a user. The results of such
testing may then be used to calibrate the spatial audio enhancement
function applied at each ear.
[0167] The next section describes some further example
applications/embodiments for hearing assist devices.
VI. Further Example Applications of Hearing Assist Device
Embodiments
[0168] The hearing assist devices described above may be used in
further applications, as well as in variations on the above-described
embodiments. In such applications, various health monitoring
technologies can be integrated into hearing aid devices as well as
into local and remote supporting devices and systems. Local systems
may comprise one or more smart phones, tablets, or computers that are
portable or stationary. Such devices may have application software
installed (downloaded) therein to define supporting behaviors.
Local systems or devices may also comprise other dedicated health
care devices such as monitors, rate measuring devices, and so on
that may be stationary or be worn or carried by the user. Sensor
data collected by one or both of the hearing aid devices and local
supporting devices or systems can be used together to help provide
a basis for a more accurate diagnosis of a user's current
health.
[0169] For instance, in an embodiment, a hearing assist device may
be docked to a stationary docking station, or a mobile device may
be held adjacent to the hearing assist device (e.g., against the
ear of the user) to cause sensor data and/or other information to
be transmitted from the hearing assist device according to NFC
techniques, as well as to enable information to be received by the
hearing assist device.
[0170] Temperature information measured by a sensor of the hearing
assist device, and/or further sensed information, may be used to
determine whether the hearing aid is being worn by a user. If the
hearing assist device is determined to not be worn (e.g.,
temperature below human temperature is detected), processing logic
of the hearing assist device may cause the hearing assist device to
enter a low power state (e.g., with periodic flashing LED or audio
to support attempts to find a misplaced hearing aid).
[0171] Similarly, an elevated human temperature (e.g., a fever,
over 99 degrees Fahrenheit, etc.) may cause the processing logic to
power up communication circuitry within the hearing assist device,
which may in turn cause a remote device to power up and participate
in a data exchange with the hearing assist device. In this manner,
the elevated temperature may be reported to another person,
including medical personnel. In an embodiment, a temperature
extreme may cause a request to be transmitted to a remote device
(e.g., a smart phone) to dial 911, medical staff, or family
members.
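The temperature-driven behavior of the two preceding paragraphs (entering a low-power beacon state when the device appears not to be worn, and powering up communication circuitry when an elevated temperature is detected) may be sketched as follows; the thresholds and state names are hypothetical examples:

```python
# Hypothetical temperature thresholds (degrees Fahrenheit).
WORN_MIN_F = 90.0   # below this, assume the aid is not being worn
FEVER_F = 99.0      # above this, treat as an elevated temperature

def temperature_action(measured_temp_f):
    """Map a temperature reading to a device action: enter a
    low-power state with periodic LED/audio beacons when unworn,
    power up communication circuitry to report a fever, otherwise
    continue normal operation."""
    if measured_temp_f < WORN_MIN_F:
        return "low_power_beacon"
    if measured_temp_f > FEVER_F:
        return "power_up_comms_and_report"
    return "normal_operation"
```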
[0172] Program code, applications, or "apps" may be downloaded to a
hearing assist device and stored in memory (e.g., in memory 734 of
FIG. 7 as code 738). Such applications can program different sensor
response functionality, tailoring the hearing assist device and/or
mobile computing device to service a particular user. For example,
sensor data (e.g., motion, heart rate, stress levels) may be
analyzed by processing logic along with recorded audio sounds
(e.g., sounds of pain, moaning, slurred words, a lack of sound), to
determine a lack of movement, a stroke, or a heart attack, and to
cause a request to be generated to dial 911 and/or a doctor immediately
(e.g., through a wireless link to a local access point or
phone).
[0173] In an embodiment, using NFC communications between the
hearing assist device and a smart phone, the smart phone can
determine a distance between the hearing assist device and the ear,
and processing logic of the hearing assist device may adjust
broadcast sound accordingly. If there is no hearing assist device
in the user's ear, and the phone reliably identifies the correct
user (e.g., by camera, by sound, etc.), the phone may be configured
to compensate for hearing loss of the user (e.g., by amplifying
particular frequency ranges). The user may manually increase the
phone volume, and may select an icon or other user interface
mechanism to turn on hearing aid frequency compensation. Magnetic
field induction may also be used to communicate the audio
signal.
[0174] In another embodiment, an alarm clock signal delivered via a
hearing assist device may be configured to repeat until the user is
determined to be upright by processing logic of the hearing assist
device (e.g., based on measured information from position/motion
518). At this point, processing logic may cause a message (e.g.,
from memory 734) to be broadcast to the user, such as "All systems
stable. You are at home and it is 8 am. It is time to take your XYZ
pill." Such messages, alerts, etc., may be triggered by processing
logic in response to sensor data changes (e.g., emergencies, etc.),
smart phone interaction, and/or pressing a status button on the
hearing assist device. A user may be determined by processing logic
to have fallen down (e.g., based on measured information from
position/motion 518 that indicates an impact and/or user
orientation). At this point, processing logic may cause a message
(e.g., from memory 734) to be broadcast to the user, such as "are
you ok? Say yes if so, and no if injured." The processing logic of
the hearing assist device may then step the user through a
question/answer (Q/A) interaction that, based on sensor data
circumstances, arrives at a likelihood of needed medical
intervention (e.g., "Are you sweaty? Can you read a book at arm's
length? Can you read the letters? Shut one eye. Shut the other eye.
Do you feel any numbness?"). Information regarding the Q/A
interaction may be transmitted to a medical staff member, who may
review the interaction and deliver their own voice to the user
through the hearing assist device and/or smart phone. A microphone
of the hearing assist device may be used to capture the user's
verbal responses, which may be delivered back to the medical staff
member (or a family member).
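The Q/A interaction described above may be sketched as a small scripted state machine; the questions, answers, and outcome labels below are hypothetical illustrations of stepping a user through such an assessment:

```python
# Hypothetical two-step script; each answer maps to an outcome, or to
# None, meaning "continue to the next question."
QA_SCRIPT = [
    ("Are you ok? Say yes if so, and no if injured.",
     {"yes": "stable", "no": None}),
    ("Do you feel any numbness?",
     {"no": "likely stable", "yes": "contact medical staff"}),
]

def run_qa(answers):
    """Step the user through the Q/A interaction and return an
    assessed outcome indicating the likelihood of needed medical
    intervention."""
    for (question, outcomes), answer in zip(QA_SCRIPT, answers):
        result = outcomes.get(answer)
        if result is not None:
            return result
    return "contact medical staff"  # unrecognized or exhausted answers
```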
[0175] Such a communication flow may of course be carried out via
a local cell phone device of the user, or via a back-door channel
through a third party's cell phone (a third party device). Also
note that status message playback at the hearing assist device can
be triggered by voice recognized commands received from the user.
In addition, the hearing assist device and smart phone may use NFC
to transfer a call or other audio to the hearing assist device. A
skin pathway (e.g., via skin communication conductor 534) may be
used for communications from a hand-held smart phone. A doctor may
remotely evaluate (and
control) hearing assist device performance/settings/battery, extra
collected health data, etc., and deliver audio, with or without
placing a call.
[0176] In addition to parallel text on a smart phone or other
hand-held device UI, voice signaling can be injected into the
hearing pathway (via speakers of the hearing assist device, or by
speakers of the mobile computing device). For example, a warning
message may be received from a smart environment for dangerous
items in that area (e.g., from access points, smart phones,
computers, sensors, etc.). The warning message may be played to the
ear of the user with background sounds suppressed (e.g., by DSP
730) to make sure that the user hears the warning message. An
intelligent mixing of sounds may be performed by DSP 730. For
instance, if the user is in a vehicle, the hearing assist device
may be configured to ensure that particular desired sounds are
heard clearly despite a sound level of the radio. The hearing
assist device and/or the vehicle itself may amplify certain desired
sounds or other sensor readings to the user, such as another
vehicle that is getting too close, or an obstacle detected in front
of the vehicle.
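The intelligent mixing performed by a DSP such as DSP 730 can be sketched as a simple ducking mixer: background audio is attenuated while a priority warning stream plays. The gain value and per-sample mixing model below are assumptions for illustration, not a description of the actual DSP implementation.

```python
# Illustrative sketch of warning-priority audio mixing ("ducking").
# DUCK_GAIN is an assumed attenuation factor, not from the source.

DUCK_GAIN = 0.1  # background suppressed to 10% while a warning is active

def mix(background, warning=None):
    """Mix per-sample background audio with an optional warning stream.

    With no warning, background passes through unchanged; while a
    warning plays, background is suppressed so the warning is heard.
    """
    if warning is None:
        return list(background)
    out = []
    for i, b in enumerate(background):
        w = warning[i] if i < len(warning) else 0.0
        out.append(b * DUCK_GAIN + w)
    return out
```

A production DSP would apply smooth gain ramps rather than an abrupt gain change, but the priority principle is the same.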
[0177] As described above, voice/speech recognition may be
incorporated into a hearing assist device to enable commands from
the user to be recognized and transmitted to a remote device under
certain circumstances. Such voice commands provided by the user may
be explicit (e.g., "contact my doctor"), or may be coded (e.g.,
saying "apple" to cause the hearing assist device to contact the
user's doctor) for various reasons, such as to avoid public
embarrassment regarding wearing a hearing aid. Furthermore, voice
of the user may be recognized by the hearing assist device, and
converted to a text message that is displayed to the user on a
mobile computing device, or transmitted to one or more intended
recipients. Furthermore, the mobile computing device may transmit
commands to the hearing assist device that are converted to audio
that is broadcast into the ear of the user by a speaker of the
hearing assist device (e.g., to provide a privacy mode). In an
embodiment, an augmented reality experience may be provided, where the mobile
computing device can provide extra information (e.g., by voice) to
the user based upon location and other aspects. For example, a
medical condition of the user may be detected by the hearing assist
device, as well as a location of the user, which may be used to
launch a web search to find a local medical clinic (e.g., contact
information, an address, etc.). Also, when the user is talking to
someone, the hearing assist device and/or mobile computing device
can train on the voice of a talker to support better filtering over
time.
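The explicit and coded voice commands described above can be modeled as a lookup from recognized speech to a device action, where a discreet code word (e.g., "apple") resolves to the same action as the explicit phrase. The command names and actions below are illustrative assumptions.

```python
# Minimal sketch of explicit and coded voice-command resolution.
# Phrases and action names are illustrative assumptions.

COMMANDS = {
    "contact my doctor": "call_doctor",  # explicit command
    "apple": "call_doctor",              # coded alias, avoids drawing attention
    "read my messages": "tts_messages",
}

def resolve_command(utterance):
    """Map recognized speech to a hearing assist device action, if any."""
    return COMMANDS.get(utterance.strip().lower())
```

Unrecognized utterances resolve to nothing, so ordinary conversation does not trigger device actions.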
[0178] As described above, the hearing assist device may implement
voice recognition that detects slurred or unusual speech patterns
of the user, which may indicate a potential medical condition of
the user. For instance, slurred speech and time of detection
information may prove critical when attempting to identify a window
of opportunity in which blood thinners may be useful in minimizing
brain damage due to a stroke.
[0179] In an embodiment, a hearing assist device may perform an
emergency call through a smart phone. For instance, if a person
finds a user that is unconscious, the person may place their smart
phone near the ear of the user, and the user's hearing assist
device may make an emergency communication through the smart phone.
The hearing assist device may gather sensor data to be used to
evaluate the user's health, and may relay this sensor data through
the smart phone to an emergency responder. The hearing assist
device may even provide commands to the person to perform on the
unconscious user (e.g., "feel the user's forehead," etc.).
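The emergency relay flow of this paragraph can be sketched as the device bundling its latest sensor readings and forwarding them once a proximate phone is detected. The payload fields and state names below are assumptions made for illustration.

```python
# Hypothetical sketch of the emergency relay through a proximate smart phone.
# Field names and states are illustrative assumptions.

def build_emergency_payload(sensors, user_id="unknown"):
    """Bundle the latest sensor readings for relay to an emergency responder."""
    return {
        "type": "emergency",
        "user": user_id,
        "vitals": dict(sensors),  # e.g., heart rate, temperature
    }

def relay(payload, phone_in_range):
    """Send the payload through a nearby smart phone when one is detected."""
    if not phone_in_range:
        return "queued"  # retry until a relay device comes into range
    return "sent"
```

While queued, the device could also prompt the bystander with spoken instructions, as described above.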
[0180] As described above, injected voice may be provided by a
hearing assist device to a user. For instance, the user may be
listening to music that is transmitted to the hearing assist device
from a remote device (e.g., through Bluetooth.TM., the user's skin,
etc.). Voice provided by the hearing assist device may interrupt
the music to provide verbal information to the user, such as "your
blood pressure is dropping," "you have a fever," etc.
[0181] Furthermore, as described above, program code or "apps" may
be downloaded to a hearing assist device as well as to the remote
device(s). Upgrades to downloaded apps may also be downloaded. Such
downloads may be performed opportunistically to preserve battery
life. For instance, such downloads may be queued to be performed
when the hearing assist device is being charged (e.g., by a
proximate device providing an RF field, when it is placed in a
charger, etc.).
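The opportunistic, battery-preserving download behavior described in this paragraph amounts to a queue that drains only while the device is charging. The charging-state names below are assumptions for illustration.

```python
# Sketch of an opportunistic download queue: app downloads and upgrades
# are held until the device is being charged. States are assumptions.

CHARGING_STATES = {"rf_field", "in_charger"}

class DownloadQueue:
    def __init__(self):
        self.pending = []  # downloads waiting for a charging opportunity
        self.done = []     # downloads completed while charging

    def enqueue(self, app):
        self.pending.append(app)

    def on_power_state(self, state):
        """Drain the queue only while the device is being charged."""
        if state in CHARGING_STATES:
            self.done.extend(self.pending)
            self.pending = []
```

On battery power, enqueued items simply wait, preserving charge until the next charging opportunity.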
VII. Conclusion
[0182] While various embodiments have been described above, it
should be understood that they have been presented by way of
example only, and not limitation. It will be apparent to persons
skilled in the relevant art that various changes in form and detail
can be made therein without departing from the spirit and scope of
the embodiments. Thus, the breadth and scope of the described
embodiments should not be limited by any of the above-described
exemplary embodiments, but should be defined only in accordance
with the following claims and their equivalents.
* * * * *