U.S. patent application number 14/225339, for light-based detection for acoustic applications, was filed with the patent office on 2014-03-25 and published on 2014-10-02. This patent application is currently assigned to AliphCom. The applicant listed for this patent is Gregory C. Burnett. Invention is credited to Gregory C. Burnett.
Application Number | 14/225339
Publication Number | 20140294208
Family ID | 46831358
Publication Date | 2014-10-02
United States Patent Application | 20140294208
Kind Code | A1
Inventor | Burnett; Gregory C.
Publication Date | October 2, 2014

LIGHT-BASED DETECTION FOR ACOUSTIC APPLICATIONS
Abstract
A light-based skin contact detector is described, including a
boot having an index of refraction less than or equal to another
index of refraction associated with skin at a frequency of light, a
light emitter and detector coupled to the boot and configured to
measure an amount of light energy reflected by an interface of the
boot, and a digital signal processor configured to detect a change
in the amount of light energy reflected by the interface.
Inventors: | Burnett; Gregory C. (Northfield, MN)
Applicant: | Burnett; Gregory C., Northfield, MN, US
Assignee: | AliphCom, San Francisco, CA
Family ID: | 46831358
Appl. No.: | 14/225339
Filed: | March 25, 2014
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number | Child Application
13420568 | Mar 14, 2012 | | 14225339
12139333 | Jun 13, 2008 | 8503691 | 13420568
12243718 | Oct 1, 2008 | 8130984 | 13420568
10769302 | Jan 30, 2004 | 7433484 | 12243718
61452948 | Mar 15, 2011 | |
60934551 | Jun 13, 2007 | |
60953444 | Aug 1, 2007 | |
60954712 | Aug 8, 2007 | |
61045377 | Apr 16, 2008 | |
60443818 | Jan 30, 2003 | |
Current U.S. Class: | 381/151
Current CPC Class: | G10L 2021/02165 20130101; H04R 1/1083 20130101; H04R 1/406 20130101; H04R 3/005 20130101; G10L 21/0208 20130101; H04R 1/46 20130101; H04R 1/1008 20130101; H04R 3/04 20130101; H04R 23/008 20130101; A61B 5/6815 20130101
Class at Publication: | 381/151
International Class: | H04R 23/00 20060101 H04R023/00; H04R 1/46 20060101 H04R001/46
Claims
1. A device, comprising: a boot comprising an index of refraction
less than or equal to another index of refraction associated with
skin at a frequency of light; a light emitter and detector coupled
to the boot and configured to measure an amount of light energy
reflected by an interface of the boot; and a digital signal
processor configured to detect a change in the amount of light
energy reflected by the interface.
2. The device of claim 1, further comprising a microphone
configured to detect speech.
3. The device of claim 1, wherein the light emitter and detector is
mounted to a side of the boot.
4. The device of claim 1, wherein the boot comprises a moldable
material.
5. The device of claim 1, wherein the boot comprises medical-grade
silicone rubber.
6. The device of claim 1, wherein the boot comprises an optically
clear gel.
7. The device of claim 1, wherein the boot comprises a
hypoallergenic surface.
8. The device of claim 1, wherein the index of refraction of the
boot is approximately 1.4 for infrared light.
9. The device of claim 1, wherein the light emitter and detector
comprises an infrared light-emitting diode pair.
10. The device of claim 1, wherein the amount of the light energy
reflected by the interface of the boot is determined based on a
shape of the boot and movement of the skin interfacing with the
boot.
11. The device of claim 1, wherein the light emitter and detector
is configured to detect contact between the boot and the skin.
12. The device of claim 1, wherein the light emitter and detector
is configured to detect the amount of the light energy reflected
within the boot relative to the shape of the boot and movement of
the skin interfacing with the boot.
Description
CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] This application is a Divisional of U.S. patent application
Ser. No. 13/420,568, filed Mar. 14, 2012, which claims the benefit
of U.S. Provisional Patent Application No. 61/452,948, filed Mar.
15, 2011; U.S. patent application Ser. No. 13/420,568 also is a
continuation-in-part of U.S. patent application Ser. No.
12/139,333, filed Jun. 13, 2008, which claims the benefit of U.S.
Provisional Patent Application No. 60/934,551, filed Jun. 13, 2007,
U.S. Provisional Patent Application No. 60/953,444, filed Aug. 1,
2007, U.S. Provisional Patent Application No. 60/954,712, filed
Aug. 8, 2007, and U.S. Provisional Patent Application No.
61/045,377, filed Apr. 16, 2008; U.S. patent application Ser. No.
13/420,568 also is a continuation-in-part of U.S. patent
application Ser. No. 12/243,718, filed Oct. 1, 2008, which is a
continuation of U.S. patent application Ser. No. 10/769,302, filed
Jan. 30, 2004, which claims the benefit of U.S. Provisional Patent
Application No. 60/443,818, filed Jan. 30, 2003; all of which are
herein incorporated by reference for all purposes.
FIELD
[0002] The disclosure herein relates generally to optics and communications and, more specifically, to light-based detection for acoustic applications.
BACKGROUND
[0003] Skin-located vibration transducers have been in use for some time. However, conventional solutions have difficulty operating where skin contact is inadequate. Also, conventional solutions are not effective for directly measuring the speech of a user through his or her skin to allow the thorough removal of noise from speech without distorting the speech. Thus, what is needed is light-based detection for acoustic applications without the limitations of conventional solutions.
BRIEF DESCRIPTION OF THE DRAWINGS
[0004] FIG. 1 depicts Snell's Law with an incident light wave
reflected at the n.sub.1/n.sub.2 boundary at an angle equal to the
angle of incidence, and transmitted (refracted) for certain values
of .theta..sub.1 at an angle .theta..sub.R, as known in the
art.
[0005] FIG. 2 shows a contact sensor open to the air so that the
large difference between n.sub.B and n.sub.0 means that much of the
light energy is reflected back into the boot, under an
embodiment.
[0006] FIG. 3 shows a contact sensor adjacent or next to a user's
skin such that the small difference between n.sub.B and n.sub.s
increases or eliminates the critical angle and raises the
transmission energy ratio so that much more energy escapes from the
boot into the skin of the user, decreasing the energy in the boot,
under an embodiment.
[0007] FIG. 4 is a table showing the effect of different user skin
indices of refraction, as determined for the critical angle, R, T,
and effective T (assuming random incidence angle), under an
embodiment.
[0008] FIG. 5 is a table showing the variables of interest for the
boot/air interface, under an embodiment.
[0009] FIG. 6 is a two-microphone adaptive noise suppression
system, under an embodiment.
[0010] FIG. 7 is an array and speech source (S) configuration,
under an embodiment. The microphones are separated by a distance
approximately equal to 2d.sub.0, and the speech source is located a
distance d.sub.s away from the midpoint of the array at an angle
.theta.. The system is axially symmetric so only d.sub.s and
.theta. need be specified.
[0011] FIG. 8 is a block diagram for a first order gradient
microphone using two omnidirectional elements O.sub.1 and O.sub.2,
under an embodiment.
[0012] FIG. 9 is a block diagram for a DOMA including two physical
microphones configured to form two virtual microphones V.sub.1 and
V.sub.2, under an embodiment.
[0013] FIG. 10 is a block diagram for a DOMA including two physical
microphones configured to form N virtual microphones V.sub.1
through V.sub.N, where N is any number greater than one, under an
embodiment.
[0014] FIG. 11 is an example of a headset or head-worn device that
includes the DOMA, as described herein, under an embodiment.
[0015] FIG. 12 is a flow diagram for denoising acoustic signals
using the DOMA, under an embodiment.
[0016] FIG. 13 is a flow diagram for forming the DOMA, under an
embodiment.
[0017] FIG. 14 is a plot of linear response of virtual microphone
V.sub.2 to a 1 kHz speech source at a distance of 0.1 m, under an
embodiment. The null is at 0 degrees, where the speech is normally
located.
[0018] FIG. 15 is a plot of linear response of virtual microphone
V.sub.2 to a 1 kHz noise source at a distance of 1.0 m, under an
embodiment. There is no null and all noise sources are
detected.
[0019] FIG. 16 is a plot of linear response of virtual microphone V.sub.1 to a 1 kHz speech source at a distance of 0.1 m, under an embodiment. There is no null and the response for speech is greater than that shown in FIG. 14.
[0020] FIG. 17 is a plot of linear response of virtual microphone V.sub.1 to a 1 kHz noise source at a distance of 1.0 m, under an embodiment. There is no null and the response is very similar to V.sub.2 shown in FIG. 15.
[0021] FIG. 18 is a plot of linear response of virtual microphone
V.sub.1 to a speech source at a distance of 0.1 m for frequencies
of 100, 500, 1000, 2000, 3000, and 4000 Hz, under an
embodiment.
[0022] FIG. 19 is a plot showing comparison of frequency responses
for speech for the array of an embodiment and for a conventional
cardioid microphone.
[0023] FIG. 20 is a plot showing speech response for V.sub.1 (top,
dashed) and V.sub.2 (bottom, solid) versus B with d.sub.s assumed
to be 0.1 m, under an embodiment. The spatial null in V.sub.2 is
relatively broad.
[0025] FIG. 21 is a plot showing a ratio of V.sub.1/V.sub.2 speech responses shown in FIG. 20 versus B, under an embodiment. The ratio is above 10 dB for all 0.8<B<1.1. This means that the physical .beta. of the system need not be exactly modeled for good performance.
[0026] FIG. 22 is a plot of B versus actual d.sub.s assuming that
d.sub.s=10 cm and theta=0, under an embodiment.
[0027] FIG. 23 is a plot of B versus theta with actual and assumed d.sub.s=10 cm, under an embodiment.
[0028] FIG. 24 is a plot of amplitude (top) and phase (bottom)
response of N(s) with B=1 and D=-7.2 .mu.sec, under an embodiment.
The resulting phase difference clearly affects high frequencies
more than low.
[0029] FIG. 25 is a plot of amplitude (top) and phase (bottom)
response of N(s) with B=1.2 and D=-7.2 .mu.sec, under an
embodiment. Non-unity B affects the entire frequency range.
[0030] FIG. 26 is a plot of amplitude (top) and phase (bottom)
response of the effect on the speech cancellation in V.sub.2 due to
a mistake in the location of the speech source with theta1=0
degrees and theta2=30 degrees, under an embodiment. The
cancellation remains below -10 dB for frequencies below 6 kHz.
[0031] FIG. 27 is a plot of amplitude (top) and phase (bottom)
response of the effect on the speech cancellation in V.sub.2 due to
a mistake in the location of the speech source with theta1=0
degrees and theta2=45 degrees, under an embodiment. The
cancellation is below -10 dB for frequencies below about 2.8 kHz
and a reduction in performance is expected.
[0032] FIG. 28 shows experimental results for a 2d.sub.0=19 mm array using a linear .beta. of 0.83 on a Bruel and Kjaer Head and Torso Simulator (HATS) in a very loud (.about.85 dBA) music/speech noise environment, under an embodiment. The noise has been reduced by
about 25 dB and the speech hardly affected, with no noticeable
distortion.
[0033] FIG. 29 is a cross section view of an acoustic vibration
sensor, under an embodiment.
[0034] FIG. 30A is an exploded view of an acoustic vibration
sensor, under the embodiment of FIG. 29.
[0035] FIG. 30B is a perspective view of an acoustic vibration sensor, under the embodiment of FIG. 29.
[0036] FIG. 31 is a schematic diagram of a coupler of an acoustic
vibration sensor, under the embodiment of FIG. 29.
[0037] FIG. 32 is an exploded view of an acoustic vibration sensor,
under an alternative embodiment.
[0038] FIG. 33 shows representative areas of sensitivity on the
human head appropriate for placement of the acoustic vibration
sensor, under an embodiment.
[0039] FIG. 34 is a generic headset device that includes an
acoustic vibration sensor placed at any of a number of locations,
under an embodiment.
[0040] FIG. 35 is a diagram of a manufacturing method for an
acoustic vibration sensor, under an embodiment.
DETAILED DESCRIPTION
[0041] Systems and methods are described herein for detection for
acoustic applications (e.g., detecting skin contact) using a gel
boot. Embodiments described herein include a boot constructed so
that it has an index of refraction close to (e.g., less than) the
index of refraction of the user's skin at the frequency of light
desired. A light emitter and detector (e.g., an infrared LED pair)
are fitted into the side of the boot. Before skin contact, a
significant amount of energy is reflected by the boot/air interface
and the detector measures that energy. Upon skin contact, the increase or elimination of the critical angle and the closer index of refraction match of the boot/skin interface cause the amount of light energy detected by the detector to drop significantly.
Digital signal processing methods are used to detect the change in
light energy inside the boot.
[0042] In the following description, numerous specific details are
introduced to provide a thorough understanding of, and enabling
description for, embodiments. One skilled in the relevant art,
however, may recognize that these embodiments can be practiced
without one or more of the specific details, or with other
components, systems, etc. In other instances, well-known structures
or operations are not shown, or are not described in detail, to
avoid obscuring aspects of the disclosed embodiments.
[0043] Unless otherwise specified, the following terms have the
corresponding meanings in addition to any meaning or understanding
they may convey to one skilled in the art. The term "infrared" (IR)
is understood to be infrared light at a wavelength ranging from
approximately 0.7 to 300 micrometers.
[0044] The vibration transducer is a very useful communication tool
in both military and consumer markets. The ability to directly
measure the speech of the user through his or her skin is very
useful, allowing a properly configured system to thoroughly remove
noise from speech without distorting the speech; such a system is
available from AliphCom of San Francisco, Calif., and is described
in detail herein and in U.S. patent application Ser. No.
12/139,333, which is herein incorporated by reference for all
purposes.
[0045] One such transducer is the skin surface microphone (SSM),
available from AliphCom of San Francisco, Calif., and described in
detail herein and in U.S. patent application Ser. No. 12/243,718,
which is herein incorporated by reference for all purposes. The
AliphCom SSM uses a modified acoustic microphone and a flexible
boot to connect the skin of the user to the interior of the
microphone. The boot is generally constructed of medical-grade
silicone rubber or a similar material designed for long-term,
hypoallergenic contact with human skin, but is not so limited. The
boot can be made to have an index of refraction close to that of
human skin (e.g., approximately 1.4 for infrared light) so that
light transmitted through the boot is preferentially transmitted
into the skin of the user when compared to transmission from the
boot to air. This difference in transmission can be detected and
used to form a contact/no contact signal for use in speech
detection and similar technologies.
[0046] Embodiments described herein include a method of detecting
skin contact using a light-based method. More specifically,
embodiments herein include a method of detecting skin contact with
a gel boot (referred to herein as "the boot"). The boot of an
embodiment has an index of refraction close to (e.g., less than)
the index of refraction of the user's skin at the frequency of
light desired. For example, the boot of an embodiment has an index
of refraction less than the index of refraction of the user's skin
at the frequency of light desired, but the embodiments are not so
limited.
[0047] The boot of an embodiment includes a light emitter and
detector housed in or fitted into the side of the boot. The light
emitter and detector of an embodiment comprise an infrared light
emitting diode (LED) pair, but the embodiments are not so limited.
Before skin contact, a significant amount of energy is reflected by
the boot/air interface and the detector measures that energy. Upon
skin contact, the increase or elimination of the critical angle and the closer index of refraction match of the boot/skin interface cause the amount of light energy detected by the detector to drop significantly, and conventional digital signal processing (DSP) methods can be used to detect the change in light energy inside the boot.
[0048] Considering the theory of reflection and refraction at an
interface, FIG. 1 shows the interaction of a light ray within a
first substance with index of refraction n.sub.1 and a second
substance with an index of refraction n.sub.2, as known in the art.
The boundary line is considered planar, and the angle between a
line perpendicular to the boundary line and the incoming ray is
termed the angle of incidence and is labeled using .theta..sub.i.
The corresponding angle between a line perpendicular to the boundary line and the outgoing ray is called the angle of refraction and is labeled using .theta..sub.R. Snell's law states that

$$\frac{\sin\theta_i}{\sin\theta_R} = \frac{n_2}{n_1}$$
[0049] In addition, light is reflected back into the first
substance at the interface at the same angle it is incident. When
traveling into a medium with a lower index of refraction (e.g., n.sub.1>n.sub.2, such as from diamond to air), the wave is completely reflected at angles of incidence greater than the critical angle, defined by

$$\theta_{crit} = \sin^{-1}\!\left(\frac{n_2}{n_1}\right), \qquad n_1 > n_2$$
[0050] If the angle of incidence is above the critical angle, the
light ray is completely reflected at the boundary, a condition
termed total internal reflection. If the angle of incidence is
below the critical angle, or if there is no critical angle at all (e.g., n.sub.1<n.sub.2), the percentage of energy reflected at the interface at normal incidence is

$$R = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^2$$

and the ratio of energy transmitted (T) is 1-R.
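To make these relations concrete, the following is a minimal sketch (not from the patent; the indices of 1.4 for the boot, 1.0 for air, and 1.2 to 1.6 for skin are the values assumed in this section) that evaluates the critical angle and the normal-incidence reflectance for the boot/air and boot/skin interfaces:

```python
import math

def critical_angle_deg(n1, n2):
    """Critical angle for total internal reflection; None if n1 <= n2."""
    if n1 <= n2:
        return None  # no critical angle; light can escape at any angle
    return math.degrees(math.asin(n2 / n1))

def normal_incidence_reflectance(n1, n2):
    """R = ((n1 - n2)/(n1 + n2))^2, the fraction of energy reflected."""
    return ((n1 - n2) / (n1 + n2)) ** 2

N_BOOT = 1.4  # assumed boot index for infrared light (see text)
for name, n2 in [("air", 1.0), ("skin, low", 1.2), ("skin, high", 1.6)]:
    theta_c = critical_angle_deg(N_BOOT, n2)
    r = normal_incidence_reflectance(N_BOOT, n2)
    angle = "none" if theta_c is None else f"{theta_c:.1f} deg"
    print(f"boot/{name}: critical angle = {angle}, R = {r:.3f}, T = {1 - r:.3f}")
```

For the boot/air interface this reproduces the roughly 46 degree critical angle used later in the text; the large internally reflected fraction at that interface comes mostly from rays arriving beyond the critical angle rather than from the small normal-incidence R.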
[0051] For infrared light, the index of refraction of the skin is
approximately 1.4. The index of refraction of air is 1.0. Thus, to
ensure maximum transmission of the light into the skin, the boot of
an embodiment has an index of refraction between 1 and 1.4. If it
is assumed that the boot (n.sub.1) has an index of refraction of
1.4, and that the skin varies between 1.2 and 1.6, then the table
of FIG. 4 shows the effect of different user skin indices of
refraction, as determined for the critical angle, R, T, and
effective T (assuming random incidence angle), under an
embodiment.
[0052] Therefore, if the boot has an index of refraction of 1.4,
and the skin index of refraction varies from 1.2 to 1.4, then the
effective energy ratio transmitted at the boot/skin interface may
vary between 0.65 and 1.0. This means that between 0 and 35% of the
energy incident on the boot/skin interface may be reflected back
into the boot. Thus it is desirable to have the index of refraction
of the boot be less than that of skin (n.sub.1<n.sub.2) so that
there is no critical angle and almost all of the light energy in
the boot is able to escape when it touches the skin.
[0053] For the situation when the boot (n.sub.1=1.4) is not against
the skin of the user, n.sub.2=1.0, the table of FIG. 5 shows the
variables of interest for the boot/air interface, under an
embodiment. This means that for light scattered around randomly
inside the boot, about 50% of the incident light may be internally
reflected at the boot/air interface. This compares to 0 to 35% of
the light reflected at the boot/skin interface. Therefore, as long
as the boot has an index of refraction close to or less than that
of the user's skin, the change in IR energy inside the boot may be
detectable and can be used to generate a contact/no contact
signal.
[0054] A majority of the increase in transmission energy (and
subsequent drop in energy inside the boot) is due to the increase
in (for boot indices of refraction less than that of skin) or
elimination of (for boot indices greater than that of skin) the
critical angle. The amount of energy transmitted in either case is
near 100%, so constructing the boot with an index of refraction
less than that of the skin of the user may result in a larger (e.g.
more easily detectable) change of energy inside the boot.
[0055] FIG. 2 and FIG. 3 show an embodiment for detecting skin contact using a light-based method. FIG. 2
shows a contact sensor open to the air so that the large difference
between n.sub.B and n.sub.0 means that much of the light energy is
reflected back into the boot, under an embodiment. FIG. 3 shows a
contact sensor adjacent or next to a user's skin such that the
small difference between n.sub.B and n.sub.s increases or
eliminates the critical angle and raises the transmission energy
ratio so that much more energy escapes from the boot into the skin
of the user, decreasing the energy in the boot, under an
embodiment.
[0056] The method for detecting skin contact is used in a headset,
for example, the Jawbone.RTM. Icon.TM. available from AliphCom of
San Francisco, Calif. An SSM microphone and boot are used as a
vibration detector but the embodiment is not so limited. In fact,
any vibration detector (including none at all) can be present as
long as the embodiment uses a boot that has an index of refraction
near that of skin. For example, the boot of an embodiment has an
index of refraction slightly higher than the index of refraction of
skin, but is not so limited.
[0057] An IR emitter/detector pair is mounted in proximity to
(e.g., touching) the boot. The IR emitter/detector pair of an
embodiment may be the Lite-On LTR-301 and LTR-302
(http://www.us.liteon.com/opto.index.html), but the embodiment is
not so limited. While the IR emitter/detector pair is recommended
for cost and availability reasons, any type of light can be used in
an embodiment. Alternatively, the detector and emitter can be
located on different sides of the boot; they need not be adjacent
one another.
[0058] The boot of an embodiment is constructed with a moldable
material that has an index of refraction approximately equal to or
slightly less than the index of refraction of the skin at the
frequency of the light used. For typical infrared light (e.g., 940
nm wavelength) the index of refraction of the boot should be
between approximately 1.0 and 1.4. For example, the boot can be
constructed using LS-3140, an optically clear encapsulation gel
that has an index of refraction of approximately 1.4 and is
available from NuSil
(http://www.nusil.com/products/engineering/photonics/optical_gels.aspx).
For best results, the emitter and detector are fitted to the boot
so that they contact as much area of the boot as possible, but the
embodiment is not so limited.
[0059] FIG. 2 shows an embodiment having the boot open to the air, with a critical angle of approximately 46 degrees. In this embodiment, approximately half of the light
incident on the boot/air interface is reflected back inside the
boot. The detector measures this energy and a signal is generated
to represent this energy level.
[0060] When the boot is placed on the skin as shown in FIG. 3, the
critical angle is increased or eliminated and the amount of energy
transmitted at the boot/skin interface is greatly increased. This
causes the amount of energy inside the boot to drop, and this
decrease is reflected in the detector signal. Conventional digital signal processing methods (such as smoothed energy detection with a fixed threshold) are applied to detect the change in energy level inside
the boot caused by the interaction of the boot and the skin. Even
small gaps between the boot and the skin may be detectable using
this method.
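As an illustration of such processing, here is a minimal sketch of smoothed energy detection with a fixed threshold (this is not the patent's implementation; the smoothing constant, the threshold, and the normalization of the detector signal are assumptions for illustration):

```python
def detect_contact(detector_samples, alpha=0.05, threshold=0.6):
    """Flag skin contact from an IR detector signal.

    A single-pole smoother tracks the detector level (normalized so the
    open-to-air level is near 1.0); when the smoothed level falls below
    `threshold`, the drop in internally reflected light is read as skin
    contact. `alpha` and `threshold` are illustrative values only.
    """
    flags = []
    smoothed = detector_samples[0] if detector_samples else 0.0
    for sample in detector_samples:
        smoothed = (1 - alpha) * smoothed + alpha * sample
        flags.append(smoothed < threshold)
    return flags

# Example: level near 1.0 while open to air, dropping toward 0.2 on contact.
signal = [1.0] * 50 + [0.2] * 50
print("first contact sample:", detect_contact(signal).index(True))
```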
[0061] Embodiments for detecting skin contact with a gel boot
described herein include a boot constructed so that it has an index
of refraction close to (e.g., less than) the index of refraction of
the user's skin at the frequency of light desired. A light emitter
and detector (e.g., an infrared LED pair) are fitted into the side
of the boot. Before skin contact, a significant amount of energy is
reflected by the boot/air interface and the detector measures that
energy. Upon skin contact, the increase or elimination of the
critical angle and the closer index of reflection match of the
boot/skin interface causes the amount of light energy detected by
the detector to drop significantly. Digital signal processing
methods are used to detect the change in light energy inside the
boot.
[0062] A dual omnidirectional microphone array (DOMA) that provides
improved noise suppression is described herein. Compared to
conventional arrays and algorithms, which seek to reduce noise by
nulling out noise sources, the array of an embodiment is used to
form two distinct virtual directional microphones which are
configured to have very similar noise responses and very dissimilar
speech responses. The null formed by the DOMA is used to remove the speech of the user from V.sub.2. The two virtual microphones of
an embodiment can be paired with an adaptive filter algorithm
and/or VAD algorithm to significantly reduce the noise without
distorting the speech, significantly improving the SNR of the
desired speech over conventional noise suppression systems. The
embodiments described herein are stable in operation, flexible with
respect to virtual microphone pattern choice, and have proven to be
robust with respect to speech source-to-array distance and
orientation as well as temperature and calibration techniques.
[0063] Unless otherwise specified, the following terms have the
corresponding meanings in addition to any meaning or understanding
they may convey to one skilled in the art.
[0064] The term "bleedthrough" means the undesired presence of
noise during speech.
[0065] The term "denoising" means removing unwanted noise from
Mic1, and also refers to the amount of reduction of noise energy in
a signal in decibels (dB).
[0066] The term "devoicing" means removing/distorting the desired
speech from Mic1.
[0067] The term "directional microphone (DM)" means a physical
directional microphone that is vented on both sides of the sensing
diaphragm.
[0068] The term "Mic1 (M1)" means a general designation for an
adaptive noise suppression system microphone that usually contains
more speech than noise.
[0069] The term "Mic2 (M2)" means a general designation for an
adaptive noise suppression system microphone that usually contains
more noise than speech.
[0070] The term "noise" means unwanted environmental acoustic
noise.
[0071] The term "null" means a zero or minima in the spatial
response of a physical or virtual directional microphone.
[0072] The term "O.sub.1" means a first physical omnidirectional
microphone used to form a microphone array.
[0073] The term "O.sub.2" means a second physical omnidirectional
microphone used to form a microphone array.
[0074] The term "speech" means desired speech of the user.
[0075] The term "Skin Surface Microphone (SSM)" is a microphone
used in an earpiece (e.g., the Jawbone.RTM. earpiece available from
AliphCom of San Francisco, Calif.) to detect speech vibrations on
the user's skin.
[0076] The term "V.sub.1" means the virtual directional "speech"
microphone, which has no nulls.
[0077] The term "V.sub.2" means the virtual directional "noise"
microphone, which has a null for the user's speech.
[0078] The term "Voice Activity Detection (VAD) signal" means a
signal indicating when user speech is detected.
[0079] The term "virtual microphones (VM)" or "virtual directional
microphones" means a microphone constructed using two or more
omnidirectional microphones and associated signal processing.
[0080] FIG. 6 is a two-microphone adaptive noise suppression system
600, under an embodiment. The two-microphone system 600 including
the combination of physical microphones MIC 1 and MIC 2 along with
the processing or circuitry components to which the microphones
couple (described in detail below, but not shown in this figure) is
referred to herein as the dual omnidirectional microphone array
(DOMA) 610, but the embodiment is not so limited. Referring to FIG.
6, in analyzing the single noise source 601 and the direct path to the microphones, the total acoustic information coming into MIC 1 (602, which can be a physical or virtual microphone) is denoted by m.sub.1(n). The total acoustic information coming into MIC 2 (603, which can also be a physical or virtual microphone) is similarly labeled m.sub.2(n). In the z (digital frequency) domain, these are represented as M.sub.1(z) and M.sub.2(z). Then

$$M_1(z) = S(z) + N_2(z)$$
$$M_2(z) = N(z) + S_2(z)$$

with

$$N_2(z) = N(z)H_1(z)$$
$$S_2(z) = S(z)H_2(z),$$

so that

$$M_1(z) = S(z) + N(z)H_1(z)$$
$$M_2(z) = N(z) + S(z)H_2(z). \qquad \text{Eq. 1}$$

This is the general case for all two-microphone systems. Equation 1 has four unknowns and only two known relationships and therefore cannot be solved explicitly.
[0081] However, there is another way to solve for some of the
unknowns in Equation 1. The analysis starts with an examination of
the case where the speech is not being generated, that is, where a
signal from the VAD subsystem 604 (optional) equals zero. In this
case, s(n)=S(z)=0, and Equation 1 reduces to

$$M_{1N}(z) = N(z)H_1(z)$$
$$M_{2N}(z) = N(z),$$

where the N subscript on the M variables indicates that primarily or only noise is being received. This leads to

$$M_{1N}(z) = M_{2N}(z)H_1(z)$$
$$H_1(z) = \frac{M_{1N}(z)}{M_{2N}(z)}. \qquad \text{Eq. 2}$$
[0082] The function H.sub.1(z) can be calculated using any of the
available system identification algorithms and the microphone
outputs when the system is certain that primarily or only noise is
being received. The calculation can be done adaptively, so that the
system can react to changes in the noise.
[0083] A solution is now available for H.sub.1(z), one of the
unknowns in Equation 1. The final unknown, H.sub.2(z), can be
determined by using the instances where speech is being produced
and the VAD equals one. When this is occurring, but the recent (perhaps less than 1 second) history of the microphones indicates low levels of noise, it can be assumed that n(s)=N(z).about.0. Then Equation 1 reduces to

$$M_{1S}(z) = S(z)$$
$$M_{2S}(z) = S(z)H_2(z),$$

which in turn leads to

$$M_{2S}(z) = M_{1S}(z)H_2(z)$$
$$H_2(z) = \frac{M_{2S}(z)}{M_{1S}(z)},$$
which is the inverse of the H.sub.1(z) calculation. However, it is
noted that different inputs are being used (now primarily or only
the speech is occurring whereas before primarily or only the noise
was occurring). While calculating H.sub.2(z), the values calculated
for H.sub.1(z) are held constant (and vice versa) and it is assumed
that the noise level is not high enough to cause errors in the
H.sub.2(z) calculation.
[0084] After calculating H.sub.1(z) and H.sub.2(z), they are used
to remove the noise from the signal. If Equation 1 is rewritten as

$$S(z) = M_1(z) - N(z)H_1(z)$$
$$N(z) = M_2(z) - S(z)H_2(z)$$
$$S(z) = M_1(z) - [M_2(z) - S(z)H_2(z)]H_1(z)$$
$$S(z)[1 - H_2(z)H_1(z)] = M_1(z) - M_2(z)H_1(z),$$

then N(z) may be substituted as shown to solve for S(z) as

$$S(z) = \frac{M_1(z) - M_2(z)H_1(z)}{1 - H_1(z)H_2(z)}. \qquad \text{Eq. 3}$$
[0085] If the transfer functions H.sub.1(z) and H.sub.2(z) can be
described with sufficient accuracy, then the noise can be
completely removed and the original signal recovered. This remains
true without respect to the amplitude or spectral characteristics
of the noise. If there is very little or no leakage from the speech
source into M.sub.2, then H.sub.2(z).about.0 and Equation 3 reduces
to
$$S(z) \approx M_1(z) - M_2(z)H_1(z). \qquad \text{Eq. 4}$$
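As a rough time-domain illustration of Equations 2 and 4 together, the following sketch adapts an FIR model of H.sub.1 during noise-only frames and subtracts the filtered Mic2 signal from Mic1 (a minimal sketch: the NLMS adaptation, filter length, step size, and external VAD flag are assumptions, not details taken from this disclosure):

```python
import numpy as np

def denoise(mic1, mic2, vad, taps=32, mu=0.1, eps=1e-8):
    """Adaptive form of S(z) ~= M1(z) - M2(z)H1(z) (Eq. 4).

    h is an FIR model of H1 (the noise path from Mic2 to Mic1). It is
    adapted with NLMS only when vad[n] == 0, i.e., when primarily or
    only noise is received (Eq. 2); during speech h is held constant.
    """
    h = np.zeros(taps)
    buf = np.zeros(taps)              # recent Mic2 samples, newest first
    out = np.zeros(len(mic1))
    for n in range(len(mic1)):
        buf = np.roll(buf, 1)
        buf[0] = mic2[n]
        e = mic1[n] - h @ buf         # Eq. 4: cleaned output sample
        out[n] = e
        if not vad[n]:                # train on noise-only frames only
            h += mu * e * buf / (buf @ buf + eps)
    return out
```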
[0086] Equation 4 is much simpler to implement and is very stable,
assuming H.sub.1(z) is stable. However, if significant speech
energy is in M.sub.2(z), devoicing can occur. In order to construct
a well-performing system and use Equation 4, consideration is given
to the following conditions:
[0087] R1. Availability of a perfect (or at least very good) VAD in
noisy conditions
[0088] R2. Sufficiently accurate H.sub.1(z)
[0089] R3. Very small (ideally zero) H.sub.2(z).
[0090] R4. During speech production, H.sub.1(z) cannot change
substantially.
[0091] R5. During noise, H.sub.2(z) cannot change
substantially.
[0092] Condition R1 is easy to satisfy if the SNR of the desired
speech to the unwanted noise is high enough. "Enough" means
different things depending on the method of VAD generation. If a
VAD vibration sensor is used, as in Burnett U.S. Pat. No.
7,256,048, accurate VAD in very low SNRs (-10 dB or less) is
possible. Acoustic-only methods using information from O.sub.1 and
O.sub.2 can also return accurate VADs, but are limited to SNRs of
.about.3 dB or greater for adequate performance.
[0093] Condition R5 is normally simple to satisfy because for most
applications the microphones may not change position with respect
to the user's mouth very often or rapidly. In those applications
where it may happen (such as hands-free conferencing systems) it
can be satisfied by configuring Mic2 so that
H.sub.2(z).apprxeq.0.
[0094] Satisfying conditions R2, R3, and R4 is more difficult but
are possible given the right combination of V.sub.1 and V.sub.2.
Methods are examined below that have proven to be effective in
satisfying the above, resulting in excellent noise suppression
performance and minimal speech removal and distortion in an
embodiment.
[0095] The DOMA, in various embodiments, can be used with the
Pathfinder system as the adaptive filter system or noise removal.
The Pathfinder system, available from AliphCom, San Francisco,
Calif., is described in detail in other patents and patent
applications referenced herein. Alternatively, any adaptive filter
or noise removal algorithm can be used with the DOMA in one or more
various alternative embodiments or configurations.
[0096] When the DOMA is used with the Pathfinder system, the
Pathfinder system generally provides adaptive noise cancellation by
combining the two microphone signals (e.g., Mic1, Mic2) by
filtering and summing in the time domain. The adaptive filter
generally uses the signal received from a first microphone of the
DOMA to remove noise from the speech received from at least one
other microphone of the DOMA, which relies on a slowly varying
linear transfer function between the two microphones for sources of
noise. Following processing of the two channels of the DOMA, an
output signal is generated in which the noise content is attenuated
with respect to the speech content, as described in detail
below.
[0097] FIG. 7 is a generalized two-microphone array (DOMA)
including an array 701/702 and speech source S configuration, under
an embodiment. FIG. 8 is a system 800 for generating or producing a
first order gradient microphone V using two omnidirectional
elements O.sub.1 and O.sub.2, under an embodiment. The array of an
embodiment includes two physical microphones 701 and 702 (e.g.,
omnidirectional microphones) placed a distance 2d.sub.0 apart and a
speech source 700 is located a distance d.sub.s away at an angle of
.theta.. This array is axially symmetric (at least in free space),
so no other angle is needed. The output from each microphone 701
and 702 can be delayed (z.sub.1 and z.sub.2), multiplied by a gain
(A.sub.1 and A.sub.2), and then summed with the other as
demonstrated in FIG. 8. The output of the array is or forms at
least one virtual microphone, as described in detail below. This
operation can be over any frequency range desired. By varying the
magnitude and sign of the delays and gains, a wide variety of
virtual microphones (VMs), also referred to herein as virtual
directional microphones, can be realized. There are other methods
known to those skilled in the art for constructing VMs but this is
a common one and may be used in the enablement below.
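A minimal sketch of this delay-gain-sum construction follows (integer sample delays are used for brevity; as noted later in the text, fractional-delay filters are used when the required delay is not a whole number of samples):

```python
import numpy as np

def virtual_mic(o1, o2, delay1, gain1, delay2, gain2):
    """Form v(n) = gain1*o1(n - delay1) + gain2*o2(n - delay2).

    o1, o2: sample arrays from omnidirectional elements O1 and O2.
    The signs and magnitudes of the gains and delays set the pattern;
    e.g., gain2 < 0 with a small delay gives a first-order gradient
    microphone as in FIG. 8.
    """
    v = np.zeros(len(o1))
    for n in range(len(o1)):
        s1 = o1[n - delay1] if n >= delay1 else 0.0
        s2 = o2[n - delay2] if n >= delay2 else 0.0
        v[n] = gain1 * s1 + gain2 * s2
    return v
```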
[0098] As an example, FIG. 9 is a block diagram for a DOMA 900
including two physical microphones configured to form two virtual
microphones V.sub.1 and V.sub.2, under an embodiment. The DOMA
includes two first order gradient microphones V.sub.1 and V.sub.2
formed using the outputs of two microphones or elements O.sub.1 and
O.sub.2 (701 and 702), under an embodiment. The DOMA of an
embodiment includes two physical microphones 701 and 702 that are
omnidirectional microphones, as described above with reference to
FIGS. 7 and 8. The output from each microphone is coupled to a
processing component 902, or circuitry, and the processing
component outputs signals representing or corresponding to the
virtual microphones V.sub.1 and V.sub.2.
[0099] In this example system 900, the output of physical
microphone 701 is coupled to processing component 902 that includes
a first processing path that includes application of a first delay
z.sub.11 and a first gain A.sub.11 and a second processing path
that includes application of a second delay z.sub.12 and a second
gain A.sub.12. The output of physical microphone 702 is coupled to
a third processing path of the processing component 902 that
includes application of a third delay z.sub.21 and a third gain
A.sub.21 and a fourth processing path that includes application of
a fourth delay z.sub.22 and a fourth gain A.sub.22. The output of
the first and third processing paths is summed to form virtual
microphone V.sub.1, and the output of the second and fourth
processing paths is summed to form virtual microphone V.sub.2.
[0100] As described in detail below, by varying the magnitude and sign of the delays and gains of the processing paths, a wide variety of virtual microphones (VMs), also referred to herein as virtual directional microphones, can be realized. While the
processing component 902 described in this example includes four
processing paths generating two virtual microphones or microphone
signals, the embodiment is not so limited. For example, FIG. 10 is
a block diagram for a DOMA 1000 including two physical microphones
configured to form N virtual microphones V.sub.1 through V.sub.N,
where N is any number greater than one, under an embodiment. Thus,
the DOMA can include a processing component 1002 having any number
of processing paths as appropriate to form a number N of virtual
microphones.
[0101] The DOMA of an embodiment can be coupled or connected to one
or more remote devices. In a system configuration, the DOMA outputs
signals to the remote devices. The remote devices include, but are
not limited to, at least one of cellular telephones, satellite
telephones, portable telephones, wireline telephones, Internet
telephones, wireless transceivers, wireless communication radios,
personal digital assistants (PDAs), personal computers (PCs),
headset devices, head-worn devices, and earpieces.
[0102] Furthermore, the DOMA of an embodiment can be a component or
subsystem integrated with a host device. In this system
configuration, the DOMA outputs signals to components or subsystems
of the host device. The host device includes, but is not limited
to, at least one of cellular telephones, satellite telephones,
portable telephones, wireline telephones, Internet telephones,
wireless transceivers, wireless communication radios, personal
digital assistants (PDAs), personal computers (PCs), headset
devices, head-worn devices, and earpieces.
[0103] As an example, FIG. 11 is an example of a headset or
head-worn device 1100 that includes the DOMA, as described herein,
under an embodiment. The headset 1100 of an embodiment includes a
housing having two areas or receptacles (not shown) that receive
and hold two microphones (e.g., O.sub.1 and O.sub.2). The headset
1100 is generally a device that can be worn by a speaker 1102, for
example, a headset or earpiece that positions or holds the
microphones in the vicinity of the speaker's mouth. The headset
1100 of an embodiment places a first physical microphone (e.g.,
physical microphone O.sub.1) in a vicinity of a speaker's lips. A
second physical microphone (e.g., physical microphone O.sub.2) is
placed a distance behind the first physical microphone. The
distance of an embodiment is in a range of a few centimeters behind
the first physical microphone or as described herein (e.g.,
described with reference to FIGS. 6-10). The DOMA is symmetric and
is used in the same configuration or manner as a single close-talk
microphone, but is not so limited.
[0104] FIG. 12 is a flow diagram for denoising 1200 acoustic
signals using the DOMA, under an embodiment. The denoising 1200
begins by receiving 1202 acoustic signals at a first physical
microphone and a second physical microphone. In response to the
acoustic signals, a first microphone signal is output from the
first physical microphone and a second microphone signal is output
from the second physical microphone 1204. A first virtual
microphone is formed 1206 by generating a first combination of the
first microphone signal and the second microphone signal. A second
virtual microphone is formed 1208 by generating a second
combination of the first microphone signal and the second
microphone signal, and the second combination is different from the
first combination. The first virtual microphone and the second
virtual microphone are distinct virtual directional microphones
with substantially similar responses to noise and substantially
dissimilar responses to speech. The denoising 1200 generates 1210
output signals by combining signals from the first virtual
microphone and the second virtual microphone, and the output
signals include less acoustic noise than the acoustic signals.
[0105] FIG. 13 is a flow diagram for forming 1300 the DOMA, under
an embodiment. Formation 1300 of the DOMA includes forming 1302 a
physical microphone array including a first physical microphone and
a second physical microphone. The first physical microphone outputs
a first microphone signal and the second physical microphone
outputs a second microphone signal. A virtual microphone array is
formed 1304 comprising a first virtual microphone and a second
virtual microphone. The first virtual microphone comprises a first
combination of the first microphone signal and the second
microphone signal. The second virtual microphone comprises a second
combination of the first microphone signal and the second
microphone signal, and the second combination is different from the
first combination. The virtual microphone array includes a single null oriented in a direction toward a source of speech of a human speaker.
[0106] The construction of VMs for the adaptive noise suppression
system of an embodiment includes substantially similar noise
response in V.sub.1 and V.sub.2. Substantially similar noise
response as used herein means that H.sub.1(z) is simple to model
and may not change much during speech, satisfying conditions R2 and
R4 described above and allowing strong denoising and minimized
bleedthrough.
[0107] The construction of VMs for the adaptive noise suppression
system of an embodiment includes relatively small speech response
for V.sub.2. The relatively small speech response for V.sub.2 means
that H.sub.2(z).apprxeq.0, which may satisfy conditions R3 and R5
described above.
[0108] The construction of VMs for the adaptive noise suppression
system of an embodiment further includes sufficient speech response
for V.sub.1 so that the cleaned speech may have significantly
higher SNR than the original speech captured by O.sub.1.
[0109] The description that follows assumes that the responses of
the omnidirectional microphones O.sub.1 and O.sub.2 to an identical
acoustic source have been normalized so that they have exactly the
same response (amplitude and phase) to that source. This can be
accomplished using standard microphone array methods (such as
frequency-based calibration) well known to those versed in the
art.
[0110] Referring to the condition that construction of VMs for the
adaptive noise suppression system of an embodiment includes
relatively small speech response for V.sub.2, it is seen that for
discrete systems V.sub.2(z) can be represented as:
$$V_2(z) = O_2(z) - z^{-\gamma}\beta O_1(z)$$

where

$$\beta = \frac{d_1}{d_2}, \qquad \gamma = \frac{d_2 - d_1}{c}\,f_s \ (\text{samples})$$

$$d_1 = \sqrt{d_s^2 - 2 d_s d_0 \cos\theta + d_0^2}, \qquad d_2 = \sqrt{d_s^2 + 2 d_s d_0 \cos\theta + d_0^2}$$
The distances d.sub.1 and d.sub.2 are the distances from O.sub.1 and
O.sub.2 to the speech source (see FIG. 7), respectively, and
.gamma. is their difference divided by c, the speed of sound, and
multiplied by the sampling frequency f.sub.s. Thus, .gamma. is in
samples, but need not be an integer. For non-integer .gamma.,
fractional-delay filters (well known to those versed in the art)
may be used.
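The geometry above can be checked numerically; this short sketch (d.sub.0=10.7 mm and f.sub.s=16 kHz are taken from examples elsewhere in this description, and the helper itself is illustrative) computes .beta. and .gamma. for a given source position:

```python
import math

def beta_gamma(d_s, theta_deg, d0=0.0107, c=343.0, fs=16000):
    """Physical beta = d1/d2 and gamma = (d2 - d1)*fs/c in samples.

    d_s: source distance from the array midpoint (m); theta_deg: source
    angle; d0: half the microphone spacing (m); fs: sampling rate (Hz).
    """
    cos_t = math.cos(math.radians(theta_deg))
    d1 = math.sqrt(d_s**2 - 2 * d_s * d0 * cos_t + d0**2)  # O1 to source
    d2 = math.sqrt(d_s**2 + 2 * d_s * d0 * cos_t + d0**2)  # O2 to source
    return d1 / d2, (d2 - d1) / c * fs

# Source on-axis at 10 cm: beta ~ 0.8, gamma ~ 1 sample (non-integer in general).
print(beta_gamma(0.10, 0.0))
```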
[0111] It is important to note that the .beta. above is not the
conventional .beta. used to denote the mixing of VMs in adaptive
beamforming; it is a physical variable of the system that depends
on the intra-microphone distance d.sub.0 (which is fixed) and the
distance d.sub.s and angle .theta., which can vary. As shown below,
for properly calibrated microphones, it is not necessary for the
system to be programmed with the exact .beta. of the array. Errors
of approximately 10-15% in the actual .beta. (i.e. the .beta. used
by the algorithm is not the .beta. of the physical array) have been
used with very little degradation in quality. The algorithmic value
of .beta. may be calculated and set for a particular user or may be
calculated adaptively during speech production when little or no
noise is present. However, adaptation during use is not required
for nominal performance.
[0112] FIG. 14 is a plot of linear response of virtual microphone
V.sub.2 with .beta.=0.8 to a 1 kHz speech source at a distance of
0.1 m, under an embodiment. The null in the linear response of
virtual microphone V.sub.2 to speech is located at 0 degrees, where
the speech is typically expected to be located. FIG. 15 is a plot
of linear response of virtual microphone V.sub.2 with .beta.=0.8 to
a 1 kHz noise source at a distance of 1.0 m, under an embodiment.
The linear response of V.sub.2 to noise is devoid of or includes no
null, meaning all noise sources are detected.
[0113] The above formulation for V.sub.2(z) has a null at the
speech location and may therefore exhibit minimal response to the
speech. This is shown in FIG. 14 for an array with d.sub.0=10.7 mm
and a speech source on the axis of the array (.theta.=0) at 10 cm
(.beta.=0.8). Note that the speech null at zero degrees is not
present for noise in the far field for the same microphone, as
shown in FIG. 15 with a noise source distance of approximately 1
meter. This allows the noise in front of the user to be detected so
that it can be removed. This differs from conventional systems that
can have difficulty removing noise in the direction of the mouth of
the user.
[0114] The V.sub.1(z) can be formulated using the general form for V.sub.1(z):

$$V_1(z) = \alpha_A O_1(z) z^{-d_A} - \alpha_B O_2(z) z^{-d_B}$$

Since

$$V_2(z) = O_2(z) - z^{-\gamma}\beta O_1(z)$$

and, since for noise in the forward direction

$$O_{2N}(z) = O_{1N}(z) z^{-\gamma},$$

then

$$V_{2N}(z) = O_{1N}(z)z^{-\gamma} - z^{-\gamma}\beta O_{1N}(z) = (1-\beta)\,O_{1N}(z)z^{-\gamma}$$

If this is then set equal to V.sub.1(z) above, the result is

$$V_{1N}(z) = \alpha_A O_{1N}(z) z^{-d_A} - \alpha_B O_{1N}(z) z^{-\gamma} z^{-d_B} = (1-\beta)\,O_{1N}(z)z^{-\gamma}$$

thus we may set

$$d_A = \gamma, \qquad d_B = 0, \qquad \alpha_A = 1, \qquad \alpha_B = \beta$$

to get

$$V_1(z) = O_1(z)z^{-\gamma} - \beta O_2(z)$$

The definitions for V.sub.1 and V.sub.2 above mean that for noise H.sub.1(z) is:

$$H_1(z) = \frac{V_1(z)}{V_2(z)} = \frac{O_1(z)z^{-\gamma} - \beta O_2(z)}{O_2(z) - z^{-\gamma}\beta O_1(z)}$$
which, if the amplitude noise responses are about the same, has the
form of an allpass filter. This has the advantage of being easily
and accurately modeled, especially in magnitude response,
satisfying R2.
[0115] This formulation allows the noise response to be as similar
as possible and the speech response to be proportional to
(1-.beta..sup.2). Since .beta. is the ratio of the distances from
O.sub.1 and O.sub.2 to the speech source, it is affected by the
size of the array and the distance from the array to the speech
source.
[0116] FIG. 16 is a plot of linear response of virtual microphone
V.sub.1 with .beta.=0.8 to a 1 kHz speech source at a distance of
0.1 m, under an embodiment. The linear response of virtual
microphone V.sub.1 to speech is devoid of or includes no null and
the response for speech is greater than that shown in FIG. 14.
[0117] FIG. 17 is a plot of linear response of virtual microphone
V.sub.1 with .beta.=0.8 to a 1 kHz noise source at a distance of
1.0 m, under an embodiment. The linear response of virtual
microphone V.sub.1 to noise is devoid of or includes no null and the response is very similar to V.sub.2 shown in FIG. 15.
[0118] FIG. 18 is a plot of linear response of virtual microphone V.sub.1 with .beta.=0.8 to a speech source at a distance of 0.1 m for frequencies of 100, 500, 1000, 2000, 3000, and 4000 Hz, under an embodiment. FIG. 19 is a plot showing comparison of
frequency responses for speech for the array of an embodiment and
for a conventional cardioid microphone.
[0119] The response of V.sub.1 to speech is shown in FIG. 16, and
the response to noise in FIG. 17. Note the difference in speech
response compared to V.sub.2 shown in FIG. 14 and the similarity of
noise response shown in FIG. 15. Also note that the orientation of
the speech response for V.sub.1 shown in FIG. 16 is completely
opposite the orientation of conventional systems, where the main
lobe of response is normally oriented toward the speech source. The
orientation of an embodiment, in which the main lobe of the speech
response of V.sub.1 is oriented away from the speech source, means
that the speech sensitivity of V.sub.1 is lower than a normal
directional microphone but is flat for all frequencies within
approximately +-30 degrees of the axis of the array, as shown in
FIG. 18. This flatness of response for speech means that no shaping
postfilter is needed to restore omnidirectional frequency response.
This does come at a price--as shown in FIG. 19, which shows the
speech response of V.sub.1 with .beta.=0.8 and the speech response
of a cardioid microphone. The speech response of V.sub.1 is
approximately 0 to .about.13 dB less than a normal directional
microphone between approximately 500 and 7500 Hz and approximately
0 to 10+dB greater than a directional microphone below
approximately 500 Hz and above 7500 Hz for a sampling frequency of
approximately 16000 Hz. However, the superior noise suppression
made possible using this system more than compensates for the
initially poorer SNR.
[0120] It should be noted that FIGS. 14-17 assume the speech is
located at approximately 0 degrees and approximately 10 cm,
.beta.=0.8, and the noise at all angles is located approximately
1.0 meter away from the midpoint of the array. Generally, the noise
distance is not required to be 1 m or more, but the denoising is
the best for those distances. For distances less than approximately
1 m, denoising may not be as effective due to the greater
dissimilarity in the noise responses of V.sub.1 and V.sub.2. This
has not proven to be an impediment in practical use--in fact, it
can be seen as a feature. Any "noise" source that is .about.10 cm
away from the earpiece is likely to be desired to be captured and
transmitted.
[0121] The speech null of V.sub.2 means that the VAD signal is no
longer a critical component. The VAD's purpose was to ensure that
the system would not train on speech and then subsequently remove
it, resulting in speech distortion. If, however, V.sub.2 contains
no speech, the adaptive system cannot train on the speech and
cannot remove it. As a result, the system can denoise all the time
without fear of devoicing, and the resulting clean audio can then
be used to generate a VAD signal for use in subsequent
single-channel noise suppression algorithms such as spectral
subtraction. In addition, constraints on the absolute value of
H.sub.1(z) (i.e. restricting it to absolute values less than two)
can keep the system from fully training on speech even if it is
detected. In reality, though, speech can be present due to a
mis-located V.sub.2 null and/or echoes or other phenomena, and a
VAD sensor or other acoustic-only VAD is recommended to minimize
speech distortion.
[0122] Depending on the application, .beta. and .gamma. may be
fixed in the noise suppression algorithm or they can be estimated
when the algorithm indicates that speech production is taking place
in the presence of little or no noise. In either case, there may be
an error in the estimate of the actual .beta. and .gamma. of the
system. The following description examines these errors and their
effect on the performance of the system. As above, "good
performance" of the system indicates that there is sufficient
denoising and minimal devoicing.
[0123] The effect of an incorrect .beta. and .gamma. on the
response of V.sub.1 and V.sub.2 can be seen by examining the
definitions above:

$$V_1(z) = O_1(z)z^{-\gamma_T} - \beta_T O_2(z)$$
$$V_2(z) = O_2(z) - z^{-\gamma_T}\beta_T O_1(z)$$

where .beta..sub.T and .gamma..sub.T denote the theoretical estimates of .beta. and .gamma. used in the noise suppression algorithm. In reality, the speech response of O.sub.2 is

$$O_{2S}(z) = \beta_R O_{1S}(z) z^{-\gamma_R}$$

where .beta..sub.R and .gamma..sub.R denote the real .beta. and .gamma. of the physical system. The differences between the theoretical and actual values of .beta. and .gamma. can be due to mis-location of the speech source (it is not where it is assumed to be) and/or a change in the air temperature (which changes the speed of sound). Inserting the actual response of O.sub.2 for speech into the above equations for V.sub.1 and V.sub.2 yields

$$V_{1S}(z) = O_{1S}(z)\left[z^{-\gamma_T} - \beta_T \beta_R z^{-\gamma_R}\right]$$
$$V_{2S}(z) = O_{1S}(z)\left[\beta_R z^{-\gamma_R} - \beta_T z^{-\gamma_T}\right]$$

If the difference in phase is represented by

$$\gamma_R = \gamma_T + \gamma_D$$

and the difference in amplitude as

$$\beta_R = B\beta_T,$$

then

$$V_{1S}(z) = O_{1S}(z)z^{-\gamma_T}\left[1 - B\beta_T^2 z^{-\gamma_D}\right]$$
$$V_{2S}(z) = \beta_T O_{1S}(z)z^{-\gamma_T}\left[Bz^{-\gamma_D} - 1\right]. \qquad \text{Eq. 5}$$
[0124] The speech cancellation in V.sub.2 (which directly affects
the degree of devoicing) and the speech response of V.sub.1 may be
dependent on both B and D. An examination of the case where D=0
follows. FIG. 20 is a plot showing speech response for V.sub.1
(top, dashed) and V.sub.2 (bottom, solid) versus B with d.sub.s
assumed to be 0.1 m, under an embodiment. This plot shows the
spatial null in V.sub.2 to be relatively broad. FIG. 21 is a plot showing a ratio of V.sub.1/V.sub.2 speech responses shown in FIG. 20 versus B, under an embodiment. The ratio of V.sub.1/V.sub.2 is above 10 dB for all 0.8<B<1.1, and this means that the physical .beta. of the system need not be exactly modeled for good performance. FIG. 22 is a plot of B versus actual d.sub.s assuming that d.sub.s=10 cm and theta=0, under an embodiment. FIG. 23 is a plot of B versus theta with actual and assumed d.sub.s=10 cm, under an embodiment.
[0125] In FIG. 20, the speech response for V.sub.1 (upper, dashed) and V.sub.2 (lower, solid) compared to O.sub.1 is shown versus B when d.sub.s is thought to be approximately 10 cm and .theta.=0. When B=1, the speech is absent from V.sub.2. In FIG. 21, the ratio of the speech responses in FIG. 20 is shown. When 0.8<B<1.1, the
V.sub.1/V.sub.2 ratio is above approximately 10 dB--enough for good
performance. Clearly, if D=0, B can vary significantly without
adversely affecting the performance of the system. Again, this
assumes that calibration of the microphones so that both their
amplitude and phase response is the same for an identical source
has been performed.
[0126] The B factor can be non-unity for a variety of reasons.
Either the distance to the speech source or the relative
orientation of the array axis and the speech source, or both, can be
different than expected. If both distance and angle mismatches are
included for B, then

$$B = \frac{\beta_R}{\beta_T} = \sqrt{\frac{d_{SR}^2 - 2d_{SR}d_0\cos\theta_R + d_0^2}{d_{SR}^2 + 2d_{SR}d_0\cos\theta_R + d_0^2}} \cdot \sqrt{\frac{d_{ST}^2 + 2d_{ST}d_0\cos\theta_T + d_0^2}{d_{ST}^2 - 2d_{ST}d_0\cos\theta_T + d_0^2}}$$

where again the T subscripts indicate the theorized values and R
the actual values. In FIG. 22, the factor B is plotted with respect
to the actual d_s with the assumption that the theoretical
d_s=10 cm and θ=0. So, if the speech source is on-axis of
the array, the actual distance can vary from approximately 5 cm to
18 cm without significantly affecting performance--a significant
amount. Similarly, FIG. 23 shows what happens if the speech source
is located at a distance of approximately 10 cm but not on the axis
of the array. In this case, the angle can vary up to approximately
±55 degrees and still result in a B less than 1.1, assuring good
performance. This is a significant amount of allowable angular
deviation. If there are both angular and distance errors, the
equation above may be used to determine whether the deviations will
result in adequate performance. Of course, if the value for
β_T is allowed to update during speech, essentially
tracking the speech source, then B can be kept near unity for
almost all configurations.
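The distance and angle tolerances quoted above follow directly from
this expression. The following Python sketch (illustrative only; the
21 mm array length, the 10 cm on-axis theoretical source, and the
0.8<B<1.1 tolerance band are taken from the surrounding text)
scans B over the actual source position:

```python
# Sketch: scan the mismatch factor B over actual source distance and angle.
import numpy as np

d0 = 0.0105  # m, half of the 21 mm array length

def beta(d_s, theta_rad):
    """Amplitude ratio d1/d2 for a source at distance d_s and angle theta."""
    d1 = np.sqrt(d_s**2 - 2 * d_s * d0 * np.cos(theta_rad) + d0**2)
    d2 = np.sqrt(d_s**2 + 2 * d_s * d0 * np.cos(theta_rad) + d0**2)
    return d1 / d2

beta_T = beta(0.10, 0.0)  # theoretical beta: source at 10 cm, on-axis

# Distance scan (on-axis): B stays inside (0.8, 1.1) from roughly 5 to 18 cm.
d = np.linspace(0.03, 0.30, 2701)
B_d = beta(d, 0.0) / beta_T
ok = d[(B_d > 0.8) & (B_d < 1.1)]
print(f"B in (0.8, 1.1) for d_s of roughly {100*ok.min():.0f}-{100*ok.max():.0f} cm")

# Angle scan at the correct 10 cm distance: B < 1.1 out to about +-55 degrees.
th = np.radians(np.linspace(0.0, 90.0, 901))
B_th = beta(0.10, th) / beta_T
print(f"B < 1.1 up to about {np.degrees(th[B_th < 1.1].max()):.0f} degrees")
```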
[0127] An examination follows of the case where B is unity but D is
nonzero. This can happen if the speech source is not where it is
thought to be or if the speed of sound is different from what it is
believed to be. From Equation 5 above, it can be seen that the
factor that weakens the speech null in V_2 for speech is

$$N(z) = Bz^{-\gamma_D} - 1$$

or, in the continuous s domain,

$$N(s) = Be^{-Ds} - 1.$$
[0128] Since γ is the time difference between the arrival of
speech at V_1 compared to V_2, a nonzero D can be caused by
errors in the estimation of the angular location of the speech
source with respect to the axis of the array and/or by temperature
changes. Examining the temperature sensitivity first, the speed of
sound varies with temperature as

$$c = 331.3 + 0.606\,T \ \text{m/s}$$

where T is the temperature in degrees Celsius. As the temperature
decreases, the speed of sound also decreases. Set 20 C as the
design temperature and the maximum expected temperature range to
-40 C to +60 C (-40 F to 140 F). The design speed of sound at 20 C
is 343 m/s, the slowest speed of sound is 307 m/s at -40 C, and the
fastest is 362 m/s at 60 C. Set the array length (2d_0) to be 21
mm. For speech sources on the axis of the array, the difference in
travel time for the largest change in the speed of sound is

$$\Delta t_{MAX} = \frac{d}{c_1} - \frac{d}{c_2} = 0.021\ \text{m}\left(\frac{1}{343\ \text{m/s}} - \frac{1}{307\ \text{m/s}}\right) = -7.2\times10^{-6}\ \text{s}$$
or approximately 7 microseconds. The response for N(s) given B=1
and D=-7.2 μs is shown in FIG. 24. FIG. 24 is a plot of
amplitude (top) and phase (bottom) response of N(s) with B=1 and
D=-7.2 μs, under an embodiment. The resulting phase difference
clearly affects high frequencies more than low. The amplitude
response is less than approximately -10 dB for all frequencies less
than 7 kHz and is about -9 dB at 8 kHz. Therefore, assuming B=1,
this system would likely perform well at frequencies up to
approximately 8 kHz. This means that a properly compensated system
would work well even up to 8 kHz over an exceptionally wide (-40 C
to +60 C) temperature range. Note that the phase mismatch due to
the delay estimation error causes N(s) to be much larger at high
frequencies than at low.
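The delay error and the N(s) amplitudes quoted above can be
reproduced with a few lines of arithmetic. A minimal sketch
(illustrative only, using the constants given in this paragraph):

```python
# Sketch: temperature-induced delay error and the null-weakening response
# |N(j*2*pi*f)| = |B*exp(-j*2*pi*f*D) - 1| for B = 1.
import numpy as np

def speed_of_sound(T_celsius):
    """Approximate speed of sound in air, m/s."""
    return 331.3 + 0.606 * T_celsius

d = 0.021  # m, array length 2*d0
D = d / speed_of_sound(20.0) - d / speed_of_sound(-40.0)  # design vs. coldest
print(f"delay error D = {D*1e6:.1f} microseconds")        # about -7.2 us

B = 1.0
f = np.array([1e3, 4e3, 7e3, 8e3])                        # Hz
N = B * np.exp(-2j * np.pi * f * D) - 1.0
for fi, db in zip(f, 20 * np.log10(np.abs(N))):
    print(f"|N| at {fi/1e3:.0f} kHz: {db:5.1f} dB")       # < -10 dB below 7 kHz
```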
[0129] If B is not unity, the robustness of the system is reduced,
since the effect of non-unity B is cumulative with that of
non-zero D. FIG. 25 is a plot of amplitude (top) and phase (bottom)
response of N(s) with B=1.2 and D=-7.2 μs, under an embodiment.
Non-unity B affects the entire frequency range: N(s) is now below
approximately -10 dB only for frequencies less than approximately 5
kHz, and the response at low frequencies is much larger. Such a
system would still perform well below 5 kHz and would suffer only
slightly elevated devoicing for frequencies above 5 kHz. For
ultimate performance, a temperature sensor may be integrated into
the system to allow the algorithm to adjust γ_T as the
temperature varies.
[0130] Another way in which D can be non-zero is when the speech
source is not where it is believed to be--specifically, the angle
from the axis of the array to the speech source is incorrect. The
distance to the source may be incorrect as well, but that
introduces an error in B, not D.
[0131] Referring to FIG. 7, it can be seen that for two speech
sources (each with its own d_s and θ) the time difference
between the arrival of the speech at O_1 and the arrival at
O_2 is

$$\Delta t = \frac{1}{c}\left(d_{12} - d_{11} - d_{22} + d_{21}\right)$$

where

$$d_{11} = \sqrt{d_{S1}^2 - 2d_{S1}d_0\cos\theta_1 + d_0^2}, \qquad d_{12} = \sqrt{d_{S1}^2 + 2d_{S1}d_0\cos\theta_1 + d_0^2}$$

$$d_{21} = \sqrt{d_{S2}^2 - 2d_{S2}d_0\cos\theta_2 + d_0^2}, \qquad d_{22} = \sqrt{d_{S2}^2 + 2d_{S2}d_0\cos\theta_2 + d_0^2}$$
[0132] The V_2 speech cancellation response for θ_1=0
degrees and θ_2=30 degrees, assuming that B=1, is shown
in FIG. 26. FIG. 26 is a plot of amplitude (top) and phase (bottom)
response of the effect on the speech cancellation in V_2 due to
a mistake in the location of the speech source with θ_1=0
degrees and θ_2=30 degrees, under an embodiment. The
cancellation is still below approximately -10 dB for frequencies
below approximately 6 kHz, so an error of this type may not
significantly affect the performance of the system. However, if
θ_2 is increased to approximately 45 degrees, as shown in
FIG. 27, the cancellation is below approximately -10 dB only for
frequencies below approximately 2.8 kHz. FIG. 27 is a plot of
amplitude (top) and phase (bottom) response of the effect on the
speech cancellation in V_2 due to a mistake in the location of
the speech source with θ_1=0 degrees and θ_2=45
degrees, under an embodiment. A reduction in performance is
expected in this case; the poor V_2 speech cancellation above
approximately 4 kHz may result in significant devoicing for those
frequencies.
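The cutoff frequencies cited for FIG. 26 and FIG. 27 can be
reproduced from the travel-time expression of the preceding
paragraph. A sketch (illustrative only; it assumes B=1, both source
positions at d_s=10 cm, 2d_0=21 mm, and c=343 m/s, consistent
with the surrounding text):

```python
# Sketch: delay error D caused by an angular mistake in the assumed source
# location, and the frequency below which the V2 null stays under -10 dB.
import numpy as np

c, d0, d_s = 343.0, 0.0105, 0.10

def travel_diff(theta_deg):
    """Path difference d2 - d1 (m) between the far and near microphones."""
    th = np.radians(theta_deg)
    d1 = np.sqrt(d_s**2 - 2 * d_s * d0 * np.cos(th) + d0**2)
    d2 = np.sqrt(d_s**2 + 2 * d_s * d0 * np.cos(th) + d0**2)
    return d2 - d1

for theta2 in (30.0, 45.0):
    D = (travel_diff(0.0) - travel_diff(theta2)) / c  # delay error, seconds
    # With B = 1, |N| = 2*|sin(pi*f*D)|; it crosses -10 dB when
    # sin(pi*f*D) = 10**(-0.5) / 2.
    f_10dB = np.arcsin(10**-0.5 / 2) / (np.pi * abs(D))
    print(f"theta2 = {theta2:.0f} deg: D = {D*1e6:.1f} us, "
          f"below -10 dB up to about {f_10dB/1e3:.1f} kHz")
```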
[0133] The description above has assumed that the microphones
O_1 and O_2 were calibrated so that their responses to a
source located the same distance away were identical in both
amplitude and phase. This is not always feasible, so a more
practical calibration procedure is presented below. It is not as
accurate, but it is much simpler to implement. Begin by defining a
filter α(z) such that

$$O_{1C}(z) = \alpha(z)O_{2C}(z)$$

where the "C" subscript indicates the use of a known calibration
source. The simplest one to use is the speech of the user. Then

$$O_{1S}(z) = \alpha(z)O_{2S}(z)$$

The microphone definitions are now:

$$V_1(z) = O_1(z)z^{-\gamma} - \beta(z)\alpha(z)O_2(z)$$

$$V_2(z) = \alpha(z)O_2(z) - z^{-\gamma}\beta(z)O_1(z)$$
[0134] The β of the system should be fixed and as close to the
real value as possible. In practice, the system is not sensitive to
changes in β, and errors of approximately ±5% are easily
tolerated. During times when the user is producing speech but there
is little or no noise, the system can train α(z) to remove as
much speech as possible. This is accomplished by:

[0135] 1. Construct an adaptive system as shown in FIG. 6 with
βO_{1S}(z)z^{-γ} in the "MIC1" position, O_{2S}(z)
in the "MIC2" position, and α(z) in the H_1(z) position.

[0136] 2. During speech, adapt α(z) to minimize the residual
of the system.

[0137] 3. Construct V_1(z) and V_2(z) as above.
[0138] A simple adaptive filter can be used for α(z) so that
the relationship between the microphones is well modeled. The
system of an embodiment trains only when speech is being produced
by the user, and a sensor like the SSM is invaluable in determining
when speech is being produced in the absence of noise. If the
speech source is fixed in position and does not vary significantly
during use (such as when the array is on an earpiece), the
adaptation should be infrequent and slow to update in order to
minimize any errors introduced by noise present during training. A
minimal sketch of such an adaptation follows.
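This sketch is illustrative only, not the embodiment's
implementation: the FIR length, step size, the β of 0.83, and
the integer sample delay standing in for z^{-γ} are all
assumed for illustration, and o1/o2 would be equal-length,
speech-only frames flagged by a VAD sensor such as the SSM.

```python
# Sketch: NLMS adaptation of the calibration filter alpha(z), modeled as a
# short FIR filter, following the MIC1/MIC2 arrangement described above.
import numpy as np

def train_alpha(o1, o2, beta=0.83, gamma=3, taps=16, mu=0.1, eps=1e-8):
    """Adapt alpha(z) so that alpha filtering O2 matches beta*O1*z^-gamma."""
    d = beta * np.concatenate([np.zeros(gamma), o1])[: len(o2)]  # "MIC1"
    w = np.zeros(taps)                                           # alpha taps
    for n in range(taps, len(o2)):
        xn = o2[n - taps + 1 : n + 1][::-1]   # "MIC2" tap vector, newest first
        e = d[n] - w @ xn                     # residual to be minimized
        w += mu * e * xn / (xn @ xn + eps)    # normalized LMS update
    return w
```

The infrequent, slow updating recommended above corresponds to a
small step size mu and to running this loop only on frames where
the VAD indicates clean speech.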
[0139] The above formulation works very well because the noise
(far-field) responses of V_1 and V_2 are very similar, while
the speech (near-field) responses are very different. However, the
formulations for V_1 and V_2 can be varied and still result
in good performance of the system as a whole. If the definitions
for V_1 and V_2 are taken from above and new variables B1
and B2 are inserted, the result is:

$$V_1(z) = O_1(z)z^{-\gamma_T} - B_1\beta_T O_2(z)$$

$$V_2(z) = O_2(z) - z^{-\gamma_T}B_2\beta_T O_1(z)$$

where B1 and B2 are both positive numbers or zero. If B1 and B2 are
set equal to unity, the optimal system described above results. If
B1 is allowed to vary from unity, the response of V_1 is
affected. An examination of the case where B2 is left at 1 and B1
is decreased follows. As B1 drops toward zero, V_1 becomes less
and less directional, until it becomes a simple omnidirectional
microphone when B1=0. Since B2=1, a speech null remains in
V_2, so the speech responses of V_1 and V_2 remain very
different. However, the noise responses are much less similar, so
denoising may not be as effective. Practically, though, the system
still performs well. B1 can also be increased from unity, and once
again the system will denoise well, just not as well as with
B1=1.
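As a concrete illustration of these definitions, the following
sketch constructs V_1 and V_2 in the frequency domain with
adjustable B1 and B2 (illustrative only; β_T=0.83 follows the
experimental section below, and the fractional sample delay
γ_T is an assumed placeholder value):

```python
# Sketch: build the virtual microphones V1 and V2 from omnidirectional
# signals o1, o2 per the definitions above, with adjustable B1/B2.
import numpy as np

def virtual_mics(o1, o2, beta_T=0.83, gamma_T=0.98, B1=1.0, B2=1.0):
    """Return time-domain V1 and V2; gamma_T is the delay in samples."""
    n = len(o1)
    O1, O2 = np.fft.rfft(o1), np.fft.rfft(o2)
    delay = np.exp(-2j * np.pi * np.fft.rfftfreq(n) * gamma_T)  # z^-gamma_T
    V1 = O1 * delay - B1 * beta_T * O2
    V2 = O2 - delay * B2 * beta_T * O1
    return np.fft.irfft(V1, n), np.fft.irfft(V2, n)
```

Setting B1=0 reduces V_1 to the plain omnidirectional O_1
delayed by γ_T, matching the limiting behavior described
above.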
[0140] If B2 is allowed to vary, the speech null in V_2 is
affected. As long as the speech null is still sufficiently deep,
the system will still perform well. Practically, values down to
approximately B2=0.6 have shown sufficient performance, but it is
recommended that B2 be set close to unity for optimal
performance.
[0141] Similarly, variables ε and Δ may be introduced so
that:

$$V_1(z) = (\epsilon - \beta)O_{2N}(z) + (1 + \Delta)O_{1N}(z)z^{-\gamma}$$

$$V_2(z) = (1 + \Delta)O_{2N}(z) + (\epsilon - \beta)O_{1N}(z)z^{-\gamma}$$

This formulation also allows the virtual microphone responses to be
varied but retains the all-pass characteristic of H_1(z).
[0142] In conclusion, the system is flexible enough to operate well
at a variety of B1 values, but B2 should be kept close to unity to
limit devoicing and achieve the best performance.
[0143] Experimental results for a 2d_0=19 mm array using a
linear β of 0.83 and B1=B2=1 on a Bruel and Kjaer Head and
Torso Simulator (HATS) in a very loud (approximately 85 dBA)
music/speech noise environment are shown in FIG. 28. The alternate
microphone calibration technique discussed above was used to
calibrate the microphones. The noise has been reduced by about 25
dB while the speech is hardly affected, with no noticeable
distortion. Clearly the technique significantly increases the SNR
of the original speech, far outperforming conventional noise
suppression techniques.
[0144] The DOMA can be a component of a single system, multiple
systems, and/or geographically separate systems. The DOMA can also
be a subcomponent or subsystem of a single system, multiple
systems, and/or geographically separate systems. The DOMA can be
coupled to one or more other components (not shown) of a host
system or a system coupled to the host system.
[0145] One or more components of the DOMA and/or a corresponding
system or application to which the DOMA is coupled or connected
includes and/or runs under and/or in association with a processing
system. The processing system includes any collection of
processor-based devices or computing devices operating together, or
components of processing systems or devices, as is known in the
art. For example, the processing system can include one or more of
a portable computer, portable communication device operating in a
communication network, and/or a network server. The portable
computer can be any of a number and/or combination of devices
selected from among personal computers, cellular telephones,
personal digital assistants, portable computing devices, and
portable communication devices, but is not so limited. The
processing system can include components within a larger computer
system.
[0146] An acoustic vibration sensor, also referred to as a speech
sensing device, is described below. The acoustic vibration sensor
is similar to a microphone in that it captures speech information
from the head area of a human talker, including in noisy
environments. Previous solutions to this problem have either been
vulnerable to noise, physically too large for certain applications,
or cost prohibitive. In contrast, the acoustic vibration sensor
described herein accurately detects and captures speech vibrations
in the presence of substantial airborne acoustic noise, yet in a
smaller and cheaper physical package. The noise-immune speech
information provided by the acoustic vibration sensor can
subsequently be used in downstream speech processing applications
(speech enhancement and noise suppression, speech encoding, speech
recognition, talker verification, etc.) to improve the performance
of those applications.
[0147] FIG. 29 is a cross section view of an example of an acoustic
vibration sensor 2900, also referred to herein as the sensor 2900,
under an embodiment. FIG. 30A is an exploded view of an acoustic
vibration sensor 2900, under the embodiment of FIG. 29. FIG. 30B is
a perspective view of an acoustic vibration sensor 2900, under the
embodiment of FIG. 29. The sensor 2900 includes an enclosure 2902
having a first port 2904 on a first side and at least one second
port 2906 on a second side of the enclosure 2902. A diaphragm 2908,
also referred to as a sensing diaphragm 2908, is positioned between
the first and second ports. A coupler 2910, also referred to as the
shroud 2910 or cap 2910, forms an acoustic seal around the
enclosure 2902 so that the first port 2904 and the side of the
diaphragm facing the first port 2904 are isolated from the airborne
acoustic environment of the human talker. The coupler 2910 of an
embodiment is contiguous, but is not so limited. The second port
2906 couples a second side of the diaphragm to the external
environment.
[0148] The sensor also includes electret material 2920 and the
associated components and electronics coupled to receive acoustic
signals from the talker via the coupler 2910 and the diaphragm 2908
and convert the acoustic signals to electrical signals
representative of human speech. Electrical contacts 2930 provide
the electrical signals as an output. Alternative embodiments can
use any type/combination of materials and/or electronics to convert
the acoustic signals to electrical signals representative of human
speech and output the electrical signals.
[0149] The coupler 2910 of an embodiment is formed using materials
having acoustic impedances matched to the impedance of human skin
(the characteristic acoustic impedance of skin is approximately
1.5×10⁶ Pa·s/m). The coupler 2910, therefore, is formed
using a material that includes at least one of silicone gel,
dielectric gel, thermoplastic elastomers (TPE), and rubber
compounds, but is not so limited. As an example, the coupler 2910
of an embodiment is formed using Kraiburg TPE products. As another
example, the coupler 2910 of an embodiment is formed using
Sylgard® Silicone products.
[0150] The coupler 2910 of an embodiment includes a contact device
2912 that includes, for example, a nipple or protrusion that
protrudes from either or both sides of the coupler 2910. In
operation, a contact device 2912 that protrudes from both sides of
the coupler 2910 includes one side of the contact device 2912 that
is in contact with the skin surface of the talker and another side
of the contact device 2912 that is in contact with the diaphragm,
but the embodiment is not so limited. The coupler 2910 and the
contact device 2912 can be formed from the same or different
materials.
[0151] The coupler 2910 transfers acoustic energy efficiently from
skin/flesh of a talker to the diaphragm, and seals the diaphragm
from ambient airborne acoustic signals. Consequently, the coupler
2910 with the contact device 2912 efficiently transfers acoustic
signals directly from the talker's body (speech vibrations) to the
diaphragm while isolating the diaphragm from acoustic signals in
the airborne environment of the talker (characteristic acoustic
impedance of air is approximately 415 Paxs/m). The diaphragm is
isolated from acoustic signals in the airborne environment of the
talker by the coupler 2910 because the coupler 2910 prevents the
signals from reaching the diaphragm, thereby reflecting and/or
dissipating much of the energy of the acoustic signals in the
airborne environment. Consequently, the sensor 2900 responds
primarily to acoustic energy transferred from the skin of the
talker, not air. When placed against the head of the talker, the
sensor 2900 picks up speech-induced acoustic signals on the surface
of the skin while airborne acoustic noise signals are largely
rejected, thereby increasing the signal-to-noise ratio and
providing a very reliable source of speech information.
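The strength of this isolation follows from the impedance mismatch
itself: at normal incidence, the fraction of incident acoustic
energy reflected at an interface is R = ((Z2 - Z1)/(Z2 + Z1))².
A brief sketch (illustrative only) using the impedances quoted in
this description:

```python
# Sketch: energy reflection at the relevant interfaces. A coupler matched to
# skin transmits vibration almost completely, while an air-to-skin (or
# air-to-coupler) interface reflects nearly all airborne acoustic energy.
Z_SKIN = 1.5e6  # Pa*s/m, characteristic acoustic impedance of skin
Z_AIR = 415.0   # Pa*s/m, characteristic acoustic impedance of air

def energy_reflection(z1, z2):
    return ((z2 - z1) / (z2 + z1)) ** 2

print(f"skin -> matched coupler: R = {energy_reflection(Z_SKIN, Z_SKIN):.3f}")
print(f"air  -> skin/coupler:    R = {energy_reflection(Z_AIR, Z_SKIN):.3f}")
```

The two results (0.000 and 0.999) are consistent with the behavior
described above: speech vibration couples through the matched
coupler while airborne noise is almost entirely reflected.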
[0152] Performance of the sensor 2900 is enhanced through the use
of the seal provided between the diaphragm and the airborne
environment of the talker. The seal is provided by the coupler
2910. A modified gradient microphone is used in an embodiment
because it has pressure ports on both ends. Thus, when the first
port 2904 is sealed by the coupler 2910, the second port 2906
provides a vent for air movement through the sensor 2900.
[0153] FIG. 31 is a schematic diagram of a coupler 2910 of an
acoustic vibration sensor, under the embodiment of FIG. 29. The
dimensions shown are in millimeters and are intended to serve as an
example for one embodiment. Alternative embodiments of the coupler
can have different configurations and/or dimensions. The dimensions
of the coupler 2910 show that the acoustic vibration sensor 2900 is
small in that the sensor 2900 of an embodiment is approximately the
same size as typical microphone capsules found in mobile
communication devices. This small form factor allows for use of the
sensor 2900 in highly mobile miniaturized applications, where some
example applications include at least one of cellular telephones,
satellite telephones, portable telephones, wireline telephones,
Internet telephones, wireless transceivers, wireless communication
radios, personal digital assistants (PDAs), personal computers
(PCs), headset devices, head-worn devices, and earpieces.
[0154] The acoustic vibration sensor provides very accurate Voice
Activity Detection (VAD) in high noise environments, where high
noise environments include airborne acoustic environments in which
the noise amplitude is as large as, if not larger than, the speech
amplitude as would be measured by conventional omnidirectional
microphones. Accurate VAD information provides significant
performance and efficiency benefits in a number of important speech
processing applications including but not limited to: noise
suppression algorithms such as the Pathfinder algorithm available
from AliphCom of San Francisco, Calif. and described in the Related
Applications; speech compression algorithms such as the Enhanced
Variable Rate Coder (EVRC) deployed in many commercial systems; and
speech recognition systems.
[0155] In addition to providing signals having an improved
signal-to-noise ratio, the acoustic vibration sensor uses minimal
power to operate (on the order of 200 microamperes, for example). In
contrast to alternative solutions that require power, filtering,
and/or significant amplification, the acoustic vibration sensor
uses a standard microphone interface to connect with signal
processing devices. The use of the standard microphone interface
avoids the additional expense and size of interface circuitry in a
host device and supports use of the sensor in highly mobile
applications where power usage is an issue.
[0156] FIG. 32 is an exploded view of an acoustic vibration sensor
3200, under an alternative embodiment. The sensor 3200 includes an
enclosure 3202 having a first port 3204 on a first side and at
least one second port (not shown) on a second side of the enclosure
3202. A diaphragm 3208 is positioned between the first and second
ports. A layer of silicone gel 3209 or other similar substance is
formed in contact with at least a portion of the diaphragm 3208. A
coupler 3210 or shroud 3210 is formed around the enclosure 3202 and
the silicone gel 3209, where a portion of the coupler 3210 is in
contact with the silicone gel 3209. The coupler 3210 and silicone gel
3209 in combination form an acoustic seal around the enclosure 3202
so that the first port 3204 and the side of the diaphragm facing
the first port 3204 are isolated from the acoustic environment of
the human talker. The second port couples a second side of the
diaphragm to the acoustic environment.
[0157] As described above, the sensor includes additional
electronic materials as appropriate that couple to receive acoustic
signals from the talker via the coupler 3210, the silicone gel 3209,
and the diaphragm 3208 and convert the acoustic signals to
electrical signals representative of human speech. Alternative
embodiments can use any type/combination of materials and/or
electronics to convert the acoustic signals to electrical signals
representative of human speech.
[0158] The coupler 3210 and/or gel 3209 of an embodiment are formed
using materials having impedances matched to the impedance of human
skin. As such, the coupler 3210 is formed using a material that
includes at least one of silicone gel, dielectric gel,
thermoplastic elastomers (TPE), and rubber compounds, but is not so
limited. The coupler 3210 transfers acoustic energy efficiently
from skin/flesh of a talker to the diaphragm, and seals the
diaphragm from ambient airborne acoustic signals. Consequently, the
coupler 3210 efficiently transfers acoustic signals directly from
the talker's body (speech vibrations) to the diaphragm while
isolating the diaphragm from acoustic signals in the airborne
environment of the talker. The diaphragm is isolated from acoustic
signals in the airborne environment of the talker by the silicone
gel 3209/coupler 3210 because the silicone gel 3209/coupler 3210
prevents the signals from reaching the diaphragm, thereby
reflecting and/or dissipating much of the energy of the acoustic
signals in the airborne environment. Consequently, the sensor 3200
responds primarily to acoustic energy transferred from the skin of
the talker, not air. When placed against the head of the talker, the
sensor 3200 picks up speech-induced acoustic signals on the surface
of the skin while airborne acoustic noise signals are largely
rejected, thereby increasing the signal-to-noise ratio and
providing a very reliable source of speech information.
[0159] There are many locations outside the ear from which the
acoustic vibration sensor can detect skin vibrations associated
with the production of speech. The sensor can be mounted in a
device, handset, or earpiece in any manner, as long as reliable
skin contact is used to detect the skin-borne vibrations associated
with the production of speech. FIG. 33 shows representative areas
of sensitivity 3300-3328 on the human head appropriate for
placement of an example of an acoustic vibration sensor 2900/3200,
under an embodiment. The areas of sensitivity 3300-3328 include
numerous locations 3302-3308 in an area behind the ear 3300, at
least one location 3312 in an area in front of the ear 3310, and
numerous locations 3322-3328 in the ear canal area 3320. The areas
of sensitivity 3300-3328 are the same for both sides of the human
head. These representative areas of sensitivity 3300-3328 are
provided as examples and do not limit the embodiments described
herein to use in these areas.
[0160] FIG. 34 is a generic headset device 3400 that includes an
acoustic vibration sensor 2900/3200 placed at any of a number of
locations 3402-3408, under an embodiment. Generally, placement of
the acoustic vibration sensor 2900/3200 can be on any part of the
device 3400 that corresponds to the areas of sensitivity 3300-3328
(FIG. 33) on the human head. While a headset device is shown as an
example, any number of communication devices known in the art can
carry and/or couple to an acoustic vibration sensor 2900/3200.
[0161] FIG. 35 is a diagram of a manufacturing method 3500 for an
acoustic vibration sensor, under an embodiment. Operation begins
with, for example, a uni-directional microphone 3520, at block
3502. Silicone gel 3522 is formed over/on the diaphragm (not shown)
and the associated port, at block 3504. A material 3524, for
example polyurethane film, is formed or placed over the microphone
3520/silicone gel 3522 combination, at block 3506, to form a
coupler or shroud. A snug fit collar or other device is placed on
the microphone to secure the material of the coupler during curing,
at block 3508.
[0162] Note that the silicone gel (block 3504) is an optional
component that depends on the embodiment of the sensor being
manufactured, as described above. Consequently, the manufacture of
an acoustic vibration sensor 2900 that includes a contact device
2912 (referring to FIG. 29) may not include the formation of
silicone gel 3522 over/on the diaphragm. Further, the coupler formed
over the microphone for this sensor 2900 may include the contact
device 2912 or the formation of the contact device 2912.
[0163] The systems and methods described herein include and/or run
under and/or in association with a processing system. The
processing system includes any collection of processor-based
devices or computing devices operating together, or components of
processing systems or devices, as is known in the art. For example,
the processing system can include one or more of a portable
computer, portable communication device operating in a
communication network, and/or a network server. The portable
computer can be any of a number and/or combination of devices
selected from among personal computers, cellular telephones,
personal digital assistants, portable computing devices, and
portable communication devices, but is not so limited. The
processing system can include components within a larger computer
system.
[0164] The processing system of an embodiment includes at least one
processor and at least one memory device or subsystem. The
processing system can also include or be coupled to at least one
database. The term "processor" as generally used herein refers to
any logic processing unit, such as one or more central processing
units (CPUs), digital signal processors (DSPs),
application-specific integrated circuits (ASICs), etc. The processor
and memory can be monolithically integrated onto a single chip,
distributed among a number of chips or components of a host system,
and/or provided by some combination of algorithms. The methods
described herein can be implemented in one or more of software
algorithm(s), programs, firmware, hardware, components, or circuitry,
in any combination.
[0165] System components embodying the systems and methods
described herein can be located together or in separate locations.
Consequently, system components embodying the systems and methods
described herein can be components of a single system, multiple
systems, and/or geographically separate systems. These components
can also be subcomponents or subsystems of a single system,
multiple systems, and/or geographically separate systems. These
components can be coupled to one or more other components of a host
system or a system coupled to the host system.
[0166] Communication paths couple the system components and include
any medium for communicating or transferring files among the
components. The communication paths include wireless connections,
wired connections, and hybrid wireless/wired connections. The
communication paths also include couplings or connections to
networks including local area networks (LANs), wide area networks
(WANs), proprietary networks, interoffice or backend networks, and
the Internet. Furthermore, the communication paths include
removable and fixed media such as floppy disks, hard disk drives, and
CD-ROM disks, as well as flash RAM, Universal Serial Bus (USB)
connections, RS-232 connections, telephone lines, buses, and
electronic mail messages.
[0167] Unless the context clearly indicates otherwise, throughout
the description, the words "comprise," "comprising," and the like
are to be construed in an inclusive sense as opposed to an
exclusive or exhaustive sense; that is to say, in a sense of
"including, but not limited to." Additionally, the words "herein,"
"hereunder," "above," "below," and words of similar import refer to
this application as a whole and not to any particular portions of
this application. When the word "or" is used in reference to a list
of two or more items, that word covers all of the following
interpretations of the word: any of the items in the list, all of
the items in the list, and any combination of the items in the
list.
[0168] The above description of embodiments is not intended to be
exhaustive or to limit the systems and methods described to the
precise form disclosed. While specific embodiments and examples are
described herein for illustrative purposes, various equivalent
modifications are possible within the scope of other systems and
methods, as those skilled in the relevant art may recognize. The
teachings provided herein can be applied to other processing
systems and methods, not only for the systems and methods described
above.
[0169] The elements and acts of the various embodiments described
above can be combined to provide further embodiments. These and
other changes can be made to the embodiments in light of the above
detailed description.
* * * * *