U.S. patent application number 10/236,042, for a method of determining uncertainty associated with acoustic distortion-based noise reduction, was filed with the patent office on 2002-09-05 and published on 2003-12-04.
The invention is credited to Alejandro Acero, Li Deng, and James G. Droppo.
Application Number | 10/236,042
Publication Number | 20030225577
Family ID | 29587546
Filed Date | 2002-09-05
United States Patent Application | 20030225577
Kind Code | A1
Deng, Li; et al. | December 4, 2003
Method of determining uncertainty associated with acoustic distortion-based noise reduction
Abstract
A method and apparatus are provided for determining uncertainty
in noise reduction based on a parametric model of speech
distortion. The method is first used to reduce noise in a noisy
signal. In particular, noise is reduced from a representation of a
portion of a noisy signal to produce a representation of a cleaned
signal by utilizing an acoustic environment model. The uncertainty
associated with the noise reduction process is then computed. In
one embodiment, the uncertainty of the noise reduction process is
used, in conjunction with the noise-reduced signal, to decode a
pattern state.
Inventors: | Deng, Li (Redmond, WA); Acero, Alejandro (Bellevue, WA); Droppo, James G. (Duvall, WA)
Correspondence Address: | WESTMAN, CHAMPLIN & KELLY, A PROFESSIONAL ASSOCIATION, INTERNATIONAL CENTRE - SUITE 1600, 900 SECOND AVENUE SOUTH, MINNEAPOLIS, MN 55402-3319, US
Family ID: | 29587546
Appl. No.: | 10/236,042
Filed: | September 5, 2002
Related U.S. Patent Documents

Application Number | Filing Date | Patent Number
10/236,042 | Sep 5, 2002 |
10/152,127 | May 20, 2002 |
10/236,042 | Sep 5, 2002 |
10/152,143 | May 20, 2002 |
Current U.S. Class: | 704/226; 704/E15.039; 704/E21.004
Current CPC Class: | G10L 21/0208 20130101; G10L 15/20 20130101
Class at Publication: | 704/226
International Class: | G10L 021/02
Claims
What is claimed is:
1. A method of reducing noise in a noisy signal, the method
comprising: reducing noise in a representation of a portion of the
noisy signal to produce a representation of a portion of a
noise-reduced signal using an acoustic-environment model; and
identifying an uncertainty associated with reducing the noise.
2. The method of claim 1 further comprising using the uncertainty
to adjust a probability distribution to form a modified probability
distribution; and using the representation of a portion of the
noise-reduced signal and the modified probability distribution to
decode a pattern state.
3. The method of claim 1 wherein the representation of a portion of
a noise-reduced signal comprises a representation of delta features
of the noise-reduced signal.
4. The method of claim 3 wherein identifying an uncertainty
comprises identifying an uncertainty associated with a delta
feature of the noise-reduced signal.
5. The method of claim 1 wherein the representation of a portion of
a noise-reduced signal comprises a representation of acceleration
features of the noise-reduced signal.
6. The method of claim 5 wherein identifying an uncertainty
comprises identifying an uncertainty associated with an
acceleration feature of the noise-reduced signal.
7. The method of claim 1 wherein reducing noise comprises using a
prior model of dynamic aspects of clean signals.
8. A computer-readable medium having computer-executable
instructions for performing steps comprising: converting a frame of
a noisy signal into a feature vector comprising at least two
components; reducing noise in a component of the feature vector for
the noisy signal to produce a component of a feature vector for a
cleaned signal by utilizing an acoustic distortion model; and
identifying an uncertainty associated with reducing the noise from
the component.
9. The computer-readable medium of claim 8 wherein the
computer-executable instructions perform further steps comprising:
determining a probability component of a probability for a phonetic
state based in part on the component for the cleaned signal and the
uncertainty associated with reducing the noise from the component;
and using the probability component to determine the probability of
the phonetic state regardless of the value of the uncertainty.
10. The computer-readable medium of claim 9 wherein determining a
probability component comprises defining a probability distribution
based in part on the uncertainty.
11. The computer-readable medium of claim 10 wherein defining a
probability distribution comprises adding the uncertainty to a
variance of a probability distribution.
12. The computer-readable medium of claim 8 wherein the
computer-executable instructions perform a further step comprising
determining a delta feature of the cleaned signal.
13. The computer-readable medium of claim 12 wherein the
computer-executable instructions perform a further step comprising
determining an uncertainty of the delta feature of the cleaned
signal.
14. The computer-readable medium of claim 8 wherein the
computer-executable instructions perform a further step comprising
determining an acceleration feature of the cleaned signal.
15. The computer-readable medium of claim 14 wherein the
computer-executable instructions perform a further step comprising
determining an uncertainty of the acceleration feature of the
cleaned signal.
16. The computer-readable medium of claim 8 wherein reducing noise
from a component further comprises utilizing a prior model of
dynamic aspects of a clean signal.
Description
REFERENCE TO RELATED APPLICATIONS
[0001] This application is a continuation-in-part of U.S. patent
application Ser. No. 10/152,127, filed May 20, 2002, and entitled
METHOD OF PATTERN RECOGNITION USING NOISE REDUCTION UNCERTAINTY,
and is a continuation-in-part of U.S. patent application Ser. No.
10/152,143, filed May 20, 2002, and entitled METHOD OF NOISE
REDUCTION BASED ON DYNAMIC ASPECTS OF SPEECH.
BACKGROUND OF THE INVENTION
[0002] The present invention relates to pattern recognition. In
particular, the present invention relates to performing pattern
recognition after noise reduction.
[0003] A pattern recognition system, such as a speech recognition
system, takes an input signal and attempts to decode the signal to
find a pattern represented by the signal. For example, in a speech
recognition system, a speech signal (often referred to as a test
signal) is received by the recognition system and is decoded to
identify a string of words represented by the speech signal.
[0004] To decode the incoming test signal, most recognition systems
utilize one or more models that describe the likelihood that a
portion of the test signal represents a particular pattern.
Examples of such models include Neural Nets, Dynamic Time Warping,
segment models, and Hidden Markov Models.
[0005] Before a model can be used to decode an incoming signal, it
must be trained. This is typically done by measuring input training
signals generated from a known training pattern. For example, in
speech recognition, a collection of speech signals is generated by
speakers reading from a known text. These speech signals are then
used to train the models.
[0006] In order for a model to work optimally, the signals used to
train the model should be similar to the eventual test signals that
are decoded. In particular, it is desirable that the training
signals contain the same amount and type of noise as the test
signals that are decoded.
[0007] Typically, the training signal is collected under "clean"
conditions and is considered to be relatively noise free. To
achieve this same low level of noise in the test signal, many prior
art systems apply noise reduction techniques to the testing data.
These noise reduction techniques result in a cleaned test signal
that is then used during pattern recognition. In most systems, the
noise reduction technique produces a sequence of multi-dimensional
feature vectors, with each feature vector representing a frame of a
noise-reduced signal.
[0008] Unfortunately, noise reduction techniques do not work
perfectly and as a result, there is some inherent uncertainty in
the cleaned signal. In the past, there have been two general
techniques for dealing with such uncertainty. The first has been to
ignore the uncertainty and treat the noise reduction process as
being perfect. Since this ignores the true state of the recognition
system, it results in recognition errors that could be avoided.
[0009] The other prior art technique for dealing with uncertainty
in noise reduction is to identify frames of the input signal where
the noise reduction technique is likely to have performed poorly.
In these frames, dimensions of the feature vectors that are likely
in error are marked by the noise reduction system so that they are
not used during recognition. Thus, the feature vector components
that have more than a predetermined amount of uncertainty are
completely ignored during decoding. Although such systems
acknowledge uncertainty in noise reduction, the technique of
completely ignoring a component treats the component as providing
no information that would be helpful during recognition. This is
highly unlikely because even with a significant amount of
uncertainty, the noise-reduced component still provides some
information that would be helpful during recognition.
[0010] In addition, the prior art has not provided a means for determining the uncertainty of some noise-removal processes. As a result, it has not been possible to account for the uncertainty associated with those processes during recognition.
[0011] In light of this, techniques are needed to identify the
uncertainty in noise reduction and use that uncertainty during
pattern recognition.
SUMMARY OF THE INVENTION
[0012] A method and apparatus are provided for reducing noise in a
noisy signal. In particular, noise is reduced from a representation
of a portion of a noisy signal to produce a representation of a
cleaned signal by utilizing an acoustic environment model. The
uncertainty associated with the noise reduction process is then
computed. In one embodiment, the uncertainty of the noise reduction
process is used to decode a pattern state from the cleaned
signal.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] FIG. 1 is a block diagram of one computing environment in
which the present invention may be practiced.
[0014] FIG. 2 is a block diagram of an alternative computing
environment in which the present invention may be practiced.
[0015] FIG. 3 is a flow diagram of a method of using a noise
reduction system of one embodiment of the present invention.
[0016] FIG. 4 is a block diagram of a pattern recognition system in
which embodiments of the present invention may be used.
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
[0017] FIG. 1 illustrates an example of a suitable computing system
environment 100 on which the invention may be implemented. The
computing system environment 100 is only one example of a suitable
computing environment and is not intended to suggest any limitation
as to the scope of use or functionality of the invention. Neither
should the computing environment 100 be interpreted as having any
dependency or requirement relating to any one or combination of
components illustrated in the exemplary operating environment
100.
[0018] The invention is operational with numerous other general
purpose or special purpose computing system environments or
configurations. Examples of well-known computing systems,
environments, and/or configurations that may be suitable for use
with the invention include, but are not limited to, personal
computers, server computers, hand-held or laptop devices,
multiprocessor systems, microprocessor-based systems, set top
boxes, programmable consumer electronics, network PCs,
minicomputers, mainframe computers, telephony systems, distributed
computing environments that include any of the above systems or
devices, and the like.
[0019] The invention may be described in the general context of
computer-executable instructions, such as program modules, being
executed by a computer. Generally, program modules include
routines, programs, objects, components, data structures, etc. that
perform particular tasks or implement particular abstract data
types. The invention may also be practiced in distributed computing
environments where tasks are performed by remote processing devices
that are linked through a communications network. In a distributed
computing environment, program modules may be located in both local
and remote computer storage media including memory storage
devices.
[0020] With reference to FIG. 1, an exemplary system for
implementing the invention includes a general-purpose computing
device in the form of a computer 110. Components of computer 110
may include, but are not limited to, a processing unit 120, a
system memory 130, and a system bus 121 that couples various system
components including the system memory to the processing unit 120.
The system bus 121 may be any of several types of bus structures
including a memory bus or memory controller, a peripheral bus, and
a local bus using any of a variety of bus architectures. By way of
example, and not limitation, such architectures include Industry
Standard Architecture (ISA) bus, Micro Channel Architecture (MCA)
bus, Enhanced ISA (EISA) bus, Video Electronics Standards
Association (VESA) local bus, and Peripheral Component Interconnect
(PCI) bus also known as Mezzanine bus.
[0021] Computer 110 typically includes a variety of computer
readable media. Computer readable media can be any available media
that can be accessed by computer 110 and includes both volatile and
nonvolatile media, removable and non-removable media. By way of
example, and not limitation, computer readable media may comprise
computer storage media and communication media. Computer storage
media includes both volatile and nonvolatile, removable and
non-removable media implemented in any method or technology for
storage of information such as computer readable instructions, data
structures, program modules or other data. Computer storage media
includes, but is not limited to, RAM, ROM, EEPROM, flash memory or
other memory technology, CD-ROM, digital versatile disks (DVD) or
other optical disk storage, magnetic cassettes, magnetic tape,
magnetic disk storage or other magnetic storage devices, or any
other medium which can be used to store the desired information and
which can be accessed by computer 110. Communication media
typically embodies computer readable instructions, data structures,
program modules or other data in a modulated data signal such as a
carrier wave or other transport mechanism and includes any
information delivery media. The term "modulated data signal" means
a signal that has one or more of its characteristics set or changed
in such a manner as to encode information in the signal. By way of
example, and not limitation, communication media includes wired
media such as a wired network or direct-wired connection, and
wireless media such as acoustic, RF, infrared and other wireless
media. Combinations of any of the above should also be included
within the scope of computer readable media.
[0022] The system memory 130 includes computer storage media in the
form of volatile and/or nonvolatile memory such as read only memory
(ROM) 131 and random access memory (RAM) 132. A basic input/output
system 133 (BIOS), containing the basic routines that help to
transfer information between elements within computer 110, such as
during start-up, is typically stored in ROM 131. RAM 132 typically
contains data and/or program modules that are immediately
accessible to and/or presently being operated on by processing unit
120. By way of example, and not limitation, FIG. 1 illustrates
operating system 134, application programs 135, other program
modules 136, and program data 137.
[0023] The computer 110 may also include other
removable/non-removable volatile/nonvolatile computer storage
media. By way of example only, FIG. 1 illustrates a hard disk drive
141 that reads from or writes to non-removable, nonvolatile
magnetic media, a magnetic disk drive 151 that reads from or writes
to a removable, nonvolatile magnetic disk 152, and an optical disk
drive 155 that reads from or writes to a removable, nonvolatile
optical disk 156 such as a CD ROM or other optical media. Other
removable/non-removable, volatile/nonvolatile computer storage
media that can be used in the exemplary operating environment
include, but are not limited to, magnetic tape cassettes, flash
memory cards, digital versatile disks, digital video tape, solid
state RAM, solid state ROM, and the like. The hard disk drive 141
is typically connected to the system bus 121 through a
non-removable memory interface such as interface 140, and magnetic
disk drive 151 and optical disk drive 155 are typically connected
to the system bus 121 by a removable memory interface, such as
interface 150.
[0024] The drives and their associated computer storage media
discussed above and illustrated in FIG. 1, provide storage of
computer readable instructions, data structures, program modules
and other data for the computer 110. In FIG. 1, for example, hard
disk drive 141 is illustrated as storing operating system 144,
application programs 145, other program modules 146, and program
data 147. Note that these components can either be the same as or
different from operating system 134, application programs 135,
other program modules 136, and program data 137. Operating system
144, application programs 145, other program modules 146, and
program data 147 are given different numbers here to illustrate
that, at a minimum, they are different copies.
[0025] A user may enter commands and information into the computer
110 through input devices such as a keyboard 162, a microphone 163,
and a pointing device 161, such as a mouse, trackball or touch pad.
Other input devices (not shown) may include a joystick, game pad,
satellite dish, scanner, or the like. These and other input devices
are often connected to the processing unit 120 through a user input
interface 160 that is coupled to the system bus, but may be
connected by other interface and bus structures, such as a parallel
port, game port or a universal serial bus (USB). A monitor 191 or
other type of display device is also connected to the system bus
121 via an interface, such as a video interface 190. In addition to
the monitor, computers may also include other peripheral output
devices such as speakers 197 and printer 196, which may be
connected through an output peripheral interface 195.
[0026] The computer 110 may operate in a networked environment
using logical connections to one or more remote computers, such as
a remote computer 180. The remote computer 180 may be a personal
computer, a hand-held device, a server, a router, a network PC, a
peer device or other common network node, and typically includes
many or all of the elements described above relative to the
computer 110. The logical connections depicted in FIG. 1 include a
local area network (LAN) 171 and a wide area network (WAN) 173, but
may also include other networks. Such networking environments are
commonplace in offices, enterprise-wide computer networks,
intranets and the Internet.
[0027] When used in a LAN networking environment, the computer 110
is connected to the LAN 171 through a network interface or adapter
170. When used in a WAN networking environment, the computer 110
typically includes a modem 172 or other means for establishing
communications over the WAN 173, such as the Internet. The modem
172, which may be internal or external, may be connected to the
system bus 121 via the user input interface 160, or other
appropriate mechanism. In a networked environment, program modules
depicted relative to the computer 110, or portions thereof, may be
stored in the remote memory storage device. By way of example, and
not limitation, FIG. 1 illustrates remote application programs 185
as residing on remote computer 180. It will be appreciated that the
network connections shown are exemplary and other means of
establishing a communications link between the computers may be
used.
[0028] FIG. 2 is a block diagram of a mobile device 200, which is
an exemplary computing environment. Mobile device 200 includes a
microprocessor 202, memory 204, input/output (I/O) components 206,
and a communication interface 208 for communicating with remote
computers or other mobile devices. In one embodiment, the
afore-mentioned components are coupled for communication with one
another over a suitable bus 210.
[0029] Memory 204 is implemented as non-volatile electronic memory
such as random access memory (RAM) with a battery back-up module
(not shown) such that information stored in memory 204 is not lost
when the general power to mobile device 200 is shut down. A portion
of memory 204 is preferably allocated as addressable memory for
program execution, while another portion of memory 204 is
preferably used for storage, such as to simulate storage on a disk
drive.
[0030] Memory 204 includes an operating system 212, application
programs 214 as well as an object store 216. During operation,
operating system 212 is preferably executed by processor 202 from
memory 204. Operating system 212, in one preferred embodiment, is a
WINDOWS.RTM. CE brand operating system commercially available from
Microsoft Corporation. Operating system 212 is preferably designed
for mobile devices, and implements database features that can be
utilized by applications 214 through a set of exposed application
programming interfaces and methods. The objects in object store 216
are maintained by applications 214 and operating system 212, at
least partially in response to calls to the exposed application
programming interfaces and methods.
[0031] Communication interface 208 represents numerous devices and
technologies that allow mobile device 200 to send and receive
information. The devices include wired and wireless modems,
satellite receivers and broadcast tuners to name a few. Mobile
device 200 can also be directly connected to a computer to exchange
data therewith. In such cases, communication interface 208 can be
an infrared transceiver or a serial or parallel communication
connection, all of which are capable of transmitting streaming
information.
[0032] Input/output components 206 include a variety of input
devices such as a touch-sensitive screen, buttons, rollers, and a
microphone as well as a variety of output devices including an
audio generator, a vibrating device, and a display. The devices
listed above are by way of example and need not all be present on
mobile device 200. In addition, other input/output devices may be
attached to or found with mobile device 200 within the scope of the
present invention.
[0033] In the discussion below, the present invention is described
with reference to speech recognition to facilitate understanding of
the invention. However, those skilled in the art will recognize
that the present invention is not limited to speech recognition and
may be used in any pattern recognition system.
[0034] The goal of speech recognition is to convert a speech signal
into a sequence of words. To do this, the speech signal is
typically divided into overlapping frames of speech and each frame
is converted into a feature vector, y, which describes the content
of the frame. During recognition, also referred to as decoding, an
acoustic model is used to identify a most likely sequence of
phonetic states that could be represented by the feature vectors.
The resulting sequence of phonetic states represents the decoded
sequence of words.
[0035] Many acoustic models evaluate a mixture of Gaussians to
determine the probability of any one phonetic state. Thus, at each
frame, a mixture of Gaussians is evaluated for each phonetic state
to identify the most likely phonetic state for the frame. The
evaluation of each Gaussian can be viewed as determining $p(y \mid m)$, which is the probability of the feature vector, $y$, given the mixture component $m$ for the phonetic state.
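As a rough illustration of this evaluation, the sketch below scores a feature vector against a diagonal-covariance Gaussian mixture for one phonetic state. The function names, array shapes, and the use of numpy are illustrative assumptions rather than any particular recognizer's implementation.

    import numpy as np

    def log_gaussian_diag(y, mean, var):
        # log N(y; mean, var) with per-component (diagonal) variances
        return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (y - mean) ** 2 / var)

    def state_log_likelihood(y, weights, means, variances):
        # log p(y | state) = log sum_m c_m N(y; mu_m, var_m), evaluated stably
        log_terms = np.array([np.log(c) + log_gaussian_diag(y, mu, v)
                              for c, mu, v in zip(weights, means, variances)])
        peak = log_terms.max()
        return peak + np.log(np.exp(log_terms - peak).sum())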
[0036] When using noise reduction in combination with speech recognition, a noisy input feature vector, $y$, is converted into a cleaned feature vector, $\hat{x}$, by the noise-reduction system. The cleaned feature vector represents an estimate of an actual clean feature vector, $x$. The cleaned feature vector, $\hat{x}$, is then applied as an input signal to the speech recognition system, which attempts to identify a most likely phonetic state given the cleaned feature vector.
[0037] Without noise corruption, the observation vector, $y$, is the same as the clean vector, $x$. As such, evaluating a Gaussian for a mixture component for a clean observation vector, $p(y \mid m)$, is the same as evaluating the probability of the clean vector $x$ given mixture component $m$, $p(x \mid m)$.

[0038] However, when the observation vector is corrupted by noise and a noise-reduction system is used, the input to the recognizer is no longer the ideal clean vector, $x$, but instead is the cleaned feature vector, $\hat{x}$, which is only an estimate of $x$. In the past, it has been common to simply treat the cleaned feature vector, $\hat{x}$, as if it were equal to the ideal clean feature vector, $x$. In such cases, evaluating a Gaussian for a mixture component, $p(y \mid m)$, has been approximated by determining the probability $p(\hat{x} \mid m)$.
[0039] Under the present invention, a more rigorous approach is used in which the probability $p(y \mid m)$ is determined by marginalizing a joint conditional probability density function $p(y, x \mid m)$ over all possible unseen clean speech vectors. In terms of an equation:

$$p(y \mid m) = \int_{-\infty}^{\infty} p(y, x \mid m)\, dx \qquad \text{EQ. 1}$$

[0040] The right-hand side of EQ. 1 may be expanded so that EQ. 1 becomes:

$$p(y \mid m) = \int_{-\infty}^{\infty} p(y \mid x, m)\, p(x \mid m)\, dx \qquad \text{EQ. 2}$$
[0041] To make the computations associated with EQ. 2 easier to implement, the probability $p(y \mid x, m)$ is assumed to be independent of $m$ so that it becomes $p(y \mid x)$. This probability is then modeled as a Gaussian distribution that describes the certainty of the noise reduction process. In particular:

$$p(y \mid x) = N(\hat{x};\, x,\, \sigma_{\hat{x}}^2) \qquad \text{EQ. 3}$$

[0042] where the distribution is defined across the cleaned feature vectors, $\hat{x}$, and has a mean equal to the ideal clean feature vector, $x$, and a variance, $\sigma_{\hat{x}}^2$,

[0043] that represents the variance associated with the noise reduction process. Note that in EQ. 3 a simplified notation has been used to represent the Gaussian distribution as a single Gaussian across the entire vector. In fact, there is a separate Gaussian, with its own mean and variance, for each dimension of the feature vector. For example:

$$p(y_l \mid x_l) = N(\hat{x}_l;\, x_l,\, \sigma_{\hat{x},l}^2) \qquad \text{EQ. 4}$$

[0044] where $y_l$ is component $l$ of the noisy feature vector, $x_l$ is component $l$ of the clean feature vector, $\hat{x}_l$ is component $l$ of the cleaned feature vector, and $\sigma_{\hat{x},l}^2$

[0045] is the variance (also referred to as uncertainty) associated with the noise reduction process for component $l$ of the feature vectors.
[0046] The prior probability, $p(x \mid m)$, is also modeled as a collection of Gaussian distributions, one for each component of the vector, such that:

$$p(x_l \mid m) = N(x_l;\, \mu_{m,l},\, \sigma_{m,l}^2) \qquad \text{EQ. 5}$$

[0047] where $\mu_{m,l}$ is the mean of the prior distribution, and $\sigma_{m,l}^2$

[0048] is the variance of the distribution for component $l$ of the feature vectors.

[0049] Combining equations 2, 4 and 5 and evaluating the integral provides:

$$p(y_l \mid m) = N(\hat{x}_l;\, \mu_{m,l},\, \sigma_{m,l}^2 + \sigma_{\hat{x},l}^2) \qquad \text{EQ. 6}$$

[0050] Thus, under the framework of the present invention, the variance of the distribution used in the acoustic model is a combination of the variance associated with the prior model, $\sigma_{m,l}^2$,

[0051] and the variance or uncertainty associated with the noise reduction process, $\sigma_{\hat{x},l}^2$.
[0052] As a result, if the uncertainty associated with the noise reduction process is high for a component, there will be little difference between the probabilities generated for each phonetic state, and the component will have little effect on the decoding of the phonetic state sequence. However, if the uncertainty of the noise reduction process is small, the acoustic models for the various phonetic states will generate distinctive probabilities for the component, and the component will thereby strongly influence the selection of the phonetic state.
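The effect of EQ. 6 can be sketched in a few lines: the per-component uncertainty of the noise reduction step is simply added to the acoustic-model variance before the Gaussian is evaluated. The sketch below assumes diagonal variances and numpy arrays; the names are illustrative, not the patent's implementation.

    import numpy as np

    def log_prob_with_uncertainty(x_hat, prior_mean, prior_var, uncertainty):
        # EQ. 6: evaluate N(x_hat_l; mu_{m,l}, sigma_{m,l}^2 + sigma_{x_hat,l}^2)
        # per component and sum the log-probabilities over the vector.
        var = prior_var + uncertainty
        return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (x_hat - prior_mean) ** 2 / var)

A component with a large uncertainty yields nearly the same score under every state and therefore carries little weight in decoding, while a component with a small uncertainty retains its discriminative power, which is exactly the behavior described in the preceding paragraph.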
[0053] Under one embodiment of the present invention, the noise
reduction process utilizes a prior model of dynamic aspects of
clean speech, a prior model of static aspects of clean speech, and
an acoustic-distortion or acoustic-environmental model that
describes the relationship between clean speech, noisy speech and
noise. The invention thus provides a technique for determining the
uncertainty in this parametric-based noise reduction process.
[0054] In the discussion below, the noise reduction method is
performed in the cepstral domain. In particular, each frame of
clean speech, noisy speech, and noise, is represented by a cepstral
feature vector that is formed by taking the logarithm and discrete
cosine transform of the frequency-domain representation of the
signal within the frame. Thus, in the discussion below, the method
attempts to identify a noise-reduced feature vector from a noisy
speech feature vector. As with Equation 3 above, a simplified
notation is used below that treats each vector as being represented
by a single distribution. In implementation, there is a separate
distribution for each component of the vector.
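As a concrete illustration of the cepstral representation just described, a bare-bones per-frame computation might look like the following; the FFT-based spectrum, the small floor added before the logarithm, and the number of retained coefficients are all illustrative assumptions.

    import numpy as np
    from scipy.fftpack import dct

    def cepstral_vector(frame, n_ceps=13, floor=1e-10):
        spectrum = np.abs(np.fft.rfft(frame)) ** 2            # frequency-domain representation
        log_spectrum = np.log(spectrum + floor)                # logarithm
        return dct(log_spectrum, type=2, norm='ortho')[:n_ceps]  # discrete cosine transform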
[0055] To reduce the noise from the noisy vector, the minimum mean square estimate of the noise-reduced vector is determined using the conditional expectation:

$$\hat{x}_t = E[x_t \mid y_t, \hat{x}_{t-1}] = \int x_t\, p(x_t \mid y_t, \hat{x}_{t-1})\, dx_t \qquad \text{EQ. 7}$$

[0056] where $\hat{x}_t$ is the estimate for the noise-reduced feature vector of the current frame, $\hat{x}_{t-1}$ is the estimate for the noise-reduced feature vector determined for a previous frame, $y_t$ is a noisy speech feature vector for the current frame, and $x_t$ is a clean speech feature vector for the current frame.
[0057] Using Bayes rule, the estimate in Equation 7 can be restated as:

$$\hat{x}_t = \frac{\int x_t\, p(x_t \mid \hat{x}_{t-1})\, p(y_t \mid x_t)\, dx_t}{p(y_t)} \qquad \text{EQ. 8}$$

[0058] where $p(x_t \mid \hat{x}_{t-1})$ is the conditional probability of a clean speech vector, $x_t$, given the estimate of the clean speech vector at the preceding frame, $\hat{x}_{t-1}$, $p(y_t \mid x_t)$ is the conditional probability of a noisy speech vector, $y_t$, given a clean speech vector, $x_t$, and $p(y_t)$ is the probability of a noisy speech vector, $y_t$.
[0059] Under one embodiment of the present invention, a mixture model is used to describe the conditional probability $p(x_t \mid \hat{x}_{t-1})$. In terms of an equation:

$$p(x_t \mid \hat{x}_{t-1}) = \sum_{m=1}^{M} c_m\, p(x_t \mid m, \hat{x}_{t-1}) \qquad \text{EQ. 9}$$

[0060] where the conditional prior probability $p(x_t \mid m, \hat{x}_{t-1})$ is the probability provided by the $m$th mixture component and $c_m$ is a weighting factor for the $m$th mixture component.
[0061] Under one embodiment, noisy speech is considered to be the result of noise being added to clean speech. This provides an acoustic environment model, otherwise known as a distortion model, for the noisy speech. In the time domain, this addition is linear. However, in the cepstral feature domain, this addition becomes non-linear, resulting in a feature vector relationship of:

$$y_t = x_t + g(n_t - x_t) + r \qquad \text{EQ. 10}$$

[0062] where

$$g(n_t - x_t) = \log\!\left(1 + e^{\,n_t - x_t}\right) \qquad \text{EQ. 11}$$

[0063] and $r$ is a residual.

[0064] To make the expression of Equation 10 easier to use, the non-linearity is linearized using a truncated Taylor series approximation. In particular, a zero-th order Taylor series expansion of $g(n_t - x_t)$ at $x_t = x_0$ is used to form the approximation:

$$y_t \approx x_t + g(\bar{n} - x_0) + r \qquad \text{EQ. 12}$$
[0065] where $g(\bar{n} - x_0)$ is the zero-th order Taylor series expansion at expansion point $x_0$, and $\bar{n}$ is an estimate of the noise in the current frame. In most embodiments, the expansion point, $x_0$, is selected as the estimate of the clean speech signal determined during a previous iteration of the noise-reduction process, and $\bar{n}$ is determined using a recursive technique such as the one disclosed in a pending U.S. patent application entitled METHOD OF ITERATIVE NOISE ESTIMATION IN A RECURSIVE FRAMEWORK, filed on Apr. 5, 2002 and having Ser. No. 10/116,792. Those skilled in the art will recognize that the recursive technique for noise estimation is one example of a number of different noise estimation techniques that may be utilized with the present invention.
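The distortion model and its linearization are compact enough to state directly in code. The sketch below assumes per-component (vector) operations in numpy; the variable names are illustrative.

    import numpy as np

    def g(n_minus_x):
        # EQ. 11: g(n - x) = log(1 + exp(n - x)), applied component-wise
        return np.log1p(np.exp(n_minus_x))

    def predicted_noisy(x, n_bar, x0):
        # EQ. 12: zero-th order Taylor approximation y ~ x + g(n_bar - x0)
        # (the residual r is not modeled here)
        return x + g(n_bar - x0)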
[0066] Based on the relationship shown in Equation 12, the conditional probability $p(y_t \mid x_t)$ of Equation 8 becomes an acoustic environment probability $p(y_t \mid x_t, \bar{n}_t)$. Using this acoustic environment probability and Equation 9 above, Equation 8 can be re-written as:

$$\hat{x}_t = \sum_{m=1}^{M} c_m \frac{\int x_t\, p(x_t \mid m, \hat{x}_{t-1})\, p(y_t \mid x_t, \bar{n}_t)\, dx_t}{p(y_t)} \qquad \text{EQ. 13}$$

[0067] To compute the integral of Equation 13, the conditional prior, $p(x_t \mid m, \hat{x}_{t-1})$, is first evaluated. To make this evaluation easier, it is recognized that:

$$p(x_t \mid m, \hat{x}_{t-1}) \propto p(x_t, \Delta x_t \mid m) \qquad \text{EQ. 14}$$

[0068] where

$$\Delta x_t = x_t - \hat{x}_{t-1} \qquad \text{EQ. 15}$$
[0069] Thus, in Equation 14, $x_t$ represents the static aspects of clean speech and $\Delta x_t$ represents the dynamic aspects of clean speech.

[0070] To further simplify the evaluation of the conditional prior, it is assumed that the static aspects of clean speech are uncorrelated to the dynamic aspects of clean speech. As a result:

$$p(x_t, \Delta x_t \mid m) = p(x_t \mid m)\, p(\Delta x_t \mid m) \qquad \text{EQ. 16}$$
[0071] Under one embodiment, the prior probability for the static aspects of clean speech and the prior probability for the dynamic aspects of clean speech are each modeled as Gaussian distributions such that:

$$p(x_t, \Delta x_t \mid m) = N(x_t;\, \mu_m^x,\, \Phi_m^x)\, N(\Delta x_t;\, \mu_m^{\Delta x},\, \Phi_m^{\Delta x}) \qquad \text{EQ. 17}$$

[0072] where $\mu_m^x$ and $\Phi_m^x$ are the mean and variance of the prior model for the static aspects of clean speech, and $\mu_m^{\Delta x}$ and $\Phi_m^{\Delta x}$ are the mean and variance of the prior model for the dynamic aspects of clean speech.

[0073] Fitting the exponent in the product of the above two Gaussian distributions into the standard quadratic form in $x_t$, and using Equation 14 above, produces:

$$p(x_t \mid m, \hat{x}_{t-1}) = N(x_t;\, \mu_m,\, \Phi_m) \qquad \text{EQ. 18}$$

[0074] where

$$\mu_m = \frac{\Phi_m^{\Delta x}}{\Phi_m^x + \Phi_m^{\Delta x}}\, \mu_m^x + \frac{\Phi_m^x}{\Phi_m^x + \Phi_m^{\Delta x}}\, \left(\hat{x}_{t-1} + \mu_m^{\Delta x}\right) \qquad \text{EQ. 19}$$

$$\Phi_m = \frac{\Phi_m^x\, \Phi_m^{\Delta x}}{\Phi_m^x + \Phi_m^{\Delta x}} \qquad \text{EQ. 20}$$
[0075] In many embodiments, the acoustic environment probability, $p(y_t \mid x_t, \bar{n}_t)$, is also modeled as a Gaussian distribution, producing an acoustic model of the form:

$$p(y_t \mid x_t, \bar{n}_t) = N(y_t;\, x_t + g(\bar{n} - x_0),\, \Psi) \qquad \text{EQ. 21}$$

[0076] where $\Psi$ is a fixed diagonal covariance matrix that is an approximation to the covariance matrix for the residual $r$.
[0077] Thus, the integral in the numerator of Equation 13 becomes:

$$I_m = \int x_t\, N(x_t;\, \mu_m,\, \Phi_m)\, N(y_t;\, x_t + g(\bar{n} - x_0),\, \Psi)\, dx_t \qquad \text{EQ. 22}$$
[0078] Evaluating the integral produces:

$$I_m = \left[ v_1(m)\, \mu_m^x + v_2(m)\left(\hat{x}_{t-1} + \mu_m^{\Delta x}\right) + w_2(m)\left(y_t - g(\bar{n} - x_0)\right) \right] N_m(y_t) \qquad \text{EQ. 23}$$

where

$$N_m(y_t) = N\!\left(y_t;\, \mu_m^x + g(\bar{n} - x_0),\, \Phi_m^x + \Psi\right) \qquad \text{EQ. 24}$$

$$v_1(m) = (\Phi_m^x + \Psi)^{-1}\, \Psi\, \Phi_m^{\Delta x}\, (\Phi_m^x + \Phi_m^{\Delta x})^{-1} \qquad \text{EQ. 25}$$

$$v_2(m) = (\Phi_m^x + \Psi)^{-1}\, \Psi\, \Phi_m^x\, (\Phi_m^x + \Phi_m^{\Delta x})^{-1} \qquad \text{EQ. 26}$$

$$w_2(m) = I - (\Phi_m^x + \Psi)^{-1}\, \Psi \qquad \text{EQ. 27}$$
[0079] The Gaussian distribution in Equation 24 can be shown to be the likelihood of the observation $y_t$ given the $m$-th component in the clean speech model, under the zero-th order approximation made in Equation 12. That is:

$$p(y_t \mid m) \approx N_m(y_t) \qquad \text{EQ. 28}$$
[0080] As a result, the denominator in Equation 13 can be determined as:

$$p(y_t) = \sum_{m=1}^{M} c_m\, p(y_t \mid m) \approx \sum_{m=1}^{M} c_m\, N_m(y_t) \qquad \text{EQ. 29}$$

[0081] Combining Equations 13, 23, and 29 produces the estimator for $x_t$:

$$\hat{x}_t = \sum_{m=1}^{M} \gamma_m(y_t) \left[ v_1(m)\, \mu_m^x + v_2(m)\left(\hat{x}_{t-1} + \mu_m^{\Delta x}\right) + w_2(m)\left(y_t - g(\bar{n} - x_0)\right) \right] \qquad \text{EQ. 30}$$

[0082] where

$$\gamma_m(y_t) = \frac{c_m\, N_m(y_t)}{\sum_{m'=1}^{M} c_{m'}\, N_{m'}(y_t)} \qquad \text{EQ. 31}$$
[0083] Each summand in Equation 30 is a mixture component's contribution to the final reduced-noise vector. Each summand is formed by multiplying the weighted sum of three terms by the relative probability of the noisy feature vector, $y_t$, occurring given the mixture component. The first term is the prediction of the reduced-noise vector taken from the mean of the static aspects of the prior clean speech model, $\mu_m^x$.

[0084] The second term is the prediction for the reduced-noise vector based on the estimate of the reduced-noise vector for the preceding frame and the mean of the dynamic aspects of the prior clean speech model, $\left(\hat{x}_{t-1} + \mu_m^{\Delta x}\right)$.

[0085] The last term, $y_t - g(\bar{n} - x_0)$, is the prediction for the noise-reduced clean-speech vector based on the acoustic distortion model in the absence of any prior information.
[0086] Thus, the estimator of Equation 30 utilizes a prior model of static aspects of clean speech, a prior model of dynamic aspects of clean speech, and an acoustic distortion model. As a result, the sequence of noise-reduced vectors produced using the estimator of Equation 30 contains fewer discontinuities and appears more natural.
[0087] In addition, the weighting values applied to each of the
terms in each summand are such that as the variance of one of the
terms increases, its relative weighting decreases. Thus, as a term
becomes more accurate in its description of the noise-reduced
value, its weighting increases relative to the weighting of the
other terms.
[0088] For example, the weighting value applied to the value predicted by the prior model for static aspects of clean speech is a function of the variance of the prior model for dynamic aspects of clean speech. As a result, when the variance of the model for the dynamic aspects of clean speech increases relative to the variance of the model for the static aspects of clean speech, the relative weighting of the value predicted by the prior model for static aspects increases.
[0089] A similar effect occurs for the weighting of the value
predicted by the prior model for the dynamic aspects of clean
speech, since this weighting value is a function of the variance of
the prior model for the static aspects of clean speech.
[0090] Before the estimator of Equation 30 can be used, the parameters for the various models must be trained. Under one embodiment, an EM algorithm is used to train the mean and covariance parameters $\mu_m^x$, $\Phi_m^x$, $\mu_m^{\Delta x}$, and $\Phi_m^{\Delta x}$

[0091] in the cepstral domain. During this training process, the mixture component weights, $c_m$, are also trained. Techniques for performing such training are well known in the art. The fixed covariance matrix, $\Psi$, is estimated by pooling the training data with all available signal-to-noise ratios.
[0092] Under one embodiment, the noise-reduced vector, $\hat{x}_t$, is determined in an iterative fashion such that the value of the noise-reduced vector in a current iteration is used as the expansion point, $x_0$, of the Taylor series approximation, $g(\bar{n} - x_0)$, in the next iteration. For the first iteration, the expansion point for the Taylor series approximation is the mixture component mean, $\mu_m^x$,

[0093] that provides the highest probability for the noisy vector using the normal distribution:

$$N\!\left(y_t;\, \mu_m^x + g(\bar{n} - x_0),\, \Psi\right) \qquad \text{EQ. 32}$$

[0094] At each iteration, the noise-reduced vector is then calculated as:

$$\hat{x}_t^{\,j+1} = \sum_{m=1}^{M} \gamma_m^{\,j}(y_t) \left[ v_1(m)\, \mu_m^x + v_2(m)\left(\hat{x}_{t-1} + \mu_m^{\Delta x}\right) + w_2(m)\left(y_t - g(\bar{n} - \hat{x}_t^{\,j})\right) \right] \qquad \text{EQ. 33}$$

[0095] where $j$ represents an iteration counter and

$$\gamma_m^{\,j}(y_t) = \frac{c_m\, N\!\left(y_t;\, \mu_m^x + g(\bar{n} - \hat{x}_t^{\,j}),\, \Phi_m^x + \Psi\right)}{\sum_{m'=1}^{M} c_{m'}\, N\!\left(y_t;\, \mu_{m'}^x + g(\bar{n} - \hat{x}_t^{\,j}),\, \Phi_{m'}^x + \Psi\right)} \qquad \text{EQ. 34}$$

[0096] Thus, the expectation $\mu_{\hat{x}_t}$ for the noise-reduced vector is:

$$\mu_{\hat{x}_t} = \hat{x}_t^{\,J} \qquad \text{EQ. 35}$$

[0097] where $J$ is the total number of iterations.
[0098] Under the present invention, the uncertainty associated with the noise reduction process represented by the iteration of Equation 33 is also calculated so that it can be used during recognition. The uncertainty is the variance $\Sigma_{\hat{x}_t}$ in $\mu_{\hat{x}_t}$, which is defined as:

$$\Sigma_{\hat{x}_t} = E[x_t^2 \mid y_t] - \hat{x}_t^2 \qquad \text{EQ. 36}$$

where

$$E[x_t^2 \mid y_t] = \sum_{m=1}^{M} c_m \frac{\int x_t^2\, p(x_t \mid m, \hat{x}_{t-1})\, p(y_t \mid x_t, \bar{n}_t)\, dx_t}{p(y_t)} = \sum_{m=1}^{M} \frac{c_m\, I_m(y_t)}{p(y_t)} \qquad \text{EQ. 37}$$

[0099] Using the approximations and models of Equations 14-21 above, the integral $I_m$ becomes:

$$I_m = \int x_t^2\, N(x_t;\, \mu_m,\, \Phi_m)\, N\!\left(y_t;\, x_t + g(\bar{n} - x_0),\, \Psi\right) dx_t$$
$$= \left[ \int x_t^2\, N\!\left(x_t;\, \theta_m(t),\, (\Phi_m + \Psi)^{-1} \Phi_m \Psi\right) dx_t \right] \times N_m(y_t)$$
$$= \left[ (\Phi_m + \Psi)^{-1} \Phi_m \Psi + \theta_m^2(t) \right] \times N_m(y_t) \qquad \text{EQ. 38}$$

[0100] where

$$\mu_m = \frac{\Phi_m^{\Delta x}}{\Phi_m^x + \Phi_m^{\Delta x}}\, \mu_m^x + \frac{\Phi_m^x}{\Phi_m^x + \Phi_m^{\Delta x}}\, \left(\hat{x}_{t-1} + \mu_m^{\Delta x}\right) \qquad \text{EQ. 39}$$

$$\Phi_m = \frac{\Phi_m^x\, \Phi_m^{\Delta x}}{\Phi_m^x + \Phi_m^{\Delta x}} \qquad \text{EQ. 40}$$

$$\theta_m(t) = (\Phi_m + \Psi)^{-1} \left[ \Psi\, \mu_m + \Phi_m \left(y_t - g(\bar{n} - x_0)\right) \right] \qquad \text{EQ. 41}$$

$$N_m(y_t) = N\!\left(y_t;\, \mu_m + g(\bar{n} - x_0),\, \Phi_m + \Psi\right) \qquad \text{EQ. 42}$$
[0101] Substituting the result of Equation 38 into Equation 37, we obtain:

$$E[x_t^2 \mid y_t] \approx \sum_{m=1}^{M} \gamma_m(y_t) \left[ (\Phi_m + \Psi)^{-1} \Phi_m \Psi + \theta_m^2(t) \right] \qquad \text{EQ. 43}$$

[0102] where

$$\gamma_m(y_t) = \frac{c_m\, N_m(y_t)}{\sum_{m'=1}^{M} c_{m'}\, N_{m'}(y_t)} \qquad \text{EQ. 44}$$

[0103] The value calculated in Equation 43 is then used together with the square of the expectation for the noise-reduced vector calculated in Equations 33-35 to determine the uncertainty of the noise reduction process using Equation 36 above. Note that in Equation 36, $\Sigma_{\hat{x}_t}$ is a vector representation of the uncertainty and that it represents a separate uncertainty $\sigma_{\hat{x},l}^2$ for each component of the static noise-reduced vector $\mu_{\hat{x}_t}$.
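Putting Equations 36-44 together, the per-frame computation can be sketched as follows for diagonal (per-component) variances. The routine computes the conditional mean of Equation 7 in the posterior form of Equations 39-44 together with the uncertainty of Equation 36; it performs a single pass for a fixed expansion point $x_0$ rather than repeating the iterative refinement of Equations 33-35, and all parameter names and shapes are illustrative assumptions rather than the patent's implementation.

    import numpy as np

    def g(v):
        # EQ. 11, applied component-wise
        return np.log1p(np.exp(v))

    def log_gauss_diag(y, mean, var):
        return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (y - mean) ** 2 / var)

    def estimate_with_uncertainty(y, n_bar, x0, x_prev,
                                  c, mu_x, phi_x, mu_dx, phi_dx, psi):
        """One frame of noise reduction plus its uncertainty.

        y, n_bar, x0, x_prev, psi : float arrays of shape (D,)
        c : mixture weights, shape (M,)
        mu_x, phi_x, mu_dx, phi_dx : mixture parameters, shape (M, D)
        """
        M = len(c)
        g0 = g(n_bar - x0)
        log_w = np.empty(M)
        theta = np.empty_like(mu_x)
        post_var = np.empty_like(phi_x)
        for m in range(M):
            # EQs. 39-40: combine the static and dynamic priors.
            denom = phi_x[m] + phi_dx[m]
            mu_m = (phi_dx[m] * mu_x[m] + phi_x[m] * (x_prev + mu_dx[m])) / denom
            phi_m = phi_x[m] * phi_dx[m] / denom
            # EQs. 38 and 41: per-component posterior mean and variance of x_t.
            theta[m] = (psi * mu_m + phi_m * (y - g0)) / (phi_m + psi)
            post_var[m] = phi_m * psi / (phi_m + psi)
            # EQ. 42, weighted by the mixture weight c_m.
            log_w[m] = np.log(c[m]) + log_gauss_diag(y, mu_m + g0, phi_m + psi)
        # EQ. 44: posterior mixture weights gamma_m(y_t).
        log_w -= log_w.max()
        gamma = np.exp(log_w)
        gamma /= gamma.sum()
        # Conditional mean (EQ. 7), second moment (EQ. 43), and uncertainty (EQ. 36).
        x_hat = gamma @ theta
        second_moment = gamma @ (post_var + theta ** 2)
        uncertainty = second_moment - x_hat ** 2
        return x_hat, uncertainty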
[0104] Equations 33-35 above determine the static components of the noise-reduced vector. Under embodiments of the present invention, differentials of these static components that describe how the components change between frames can also be determined. These differentials are referred to generically as dynamic features. Under the present invention, two types of dynamic features are determined. One is the delta feature that indicates changes between frames, and the other is the acceleration feature that indicates the rate of change of the delta features. Under one embodiment, the delta features are calculated as:

[0105]

$$\Delta\hat{x}_t = \sum_{\tau=-K}^{K} \omega_\tau\, \hat{x}_{t+\tau} \qquad \text{EQ. 45}$$

[0106] where $\Delta\hat{x}_t$ is the dynamic feature for the noise-reduced vector, $K$ indicates the number of neighboring frames that are to be used to determine the dynamic features, and $\omega_\tau$ are fixed weights that provide relative weighting to the neighboring frames. Note that the vector notation is used in Equation 45 to indicate that a separate calculation is performed for each component of the noise-reduced feature vectors.
[0107] Acceleration features can also be determined based on the delta features calculated in Equation 45. Specifically:

[0108]

$$\Delta^2\hat{x}_t = \sum_{\tau=-K}^{K} \upsilon_\tau\, \Delta\hat{x}_{t+\tau} \qquad \text{EQ. 46}$$

[0109] where $\Delta^2\hat{x}_t$ is the acceleration feature for the noise-reduced vector, $K$ indicates the number of neighboring frames that are to be used to determine the dynamic features, and $\upsilon_\tau$ are fixed weights that provide relative weighting to the neighboring frames.
[0110] The delta and acceleration features are provided to the decoder along with the static noise-reduced features. In addition, the uncertainties of the delta and acceleration features are calculated and provided to the decoder. Specifically, the uncertainties of the delta and acceleration features are calculated as:

$$\Sigma_{\Delta\hat{x}_t} = \sum_{\tau=-K}^{K} \omega_\tau^2\, \Sigma_{\hat{x}_{t+\tau}} \qquad \text{EQ. 47}$$

$$\Sigma_{\Delta^2\hat{x}_t} = \sum_{\tau=-K}^{K} \upsilon_\tau^2\, \Sigma_{\Delta\hat{x}_{t+\tau}} \qquad \text{EQ. 48}$$

[0111] where $\Sigma_{\hat{x}_t}$ is the uncertainty calculated in Equation 36 above, $\Sigma_{\Delta\hat{x}_t}$ is the uncertainty in the delta features, $\Sigma_{\Delta^2\hat{x}_t}$ is the uncertainty in the acceleration features, and $\omega_\tau$ and $\upsilon_\tau$ are the same weights used in Equations 45 and 46.
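A sketch of Equations 45-48 is given below. The window half-width K, the particular regression-style weights, and the edge padding are illustrative assumptions; the key point is that each dynamic feature is a fixed weighted sum of static features, so its uncertainty is the same sum taken with squared weights.

    import numpy as np

    def dynamic_features(values, uncertainties, K=2):
        """Apply EQ. 45/47 (or, reused on the delta stream, EQ. 46/48).

        values, uncertainties : float arrays of shape (T, D) holding a feature
        stream and its per-component uncertainty.
        """
        T = values.shape[0]
        w = np.arange(-K, K + 1, dtype=float)
        w /= np.sum(w ** 2)                                # illustrative fixed weights
        padded = np.pad(values, ((K, K), (0, 0)), mode='edge')
        padded_unc = np.pad(uncertainties, ((K, K), (0, 0)), mode='edge')
        delta = np.zeros_like(values)
        delta_unc = np.zeros_like(uncertainties)
        for i, tau in enumerate(range(-K, K + 1)):
            delta += w[i] * padded[K + tau:K + tau + T]                    # EQ. 45
            delta_unc += (w[i] ** 2) * padded_unc[K + tau:K + tau + T]     # EQ. 47
        return delta, delta_unc

Acceleration features and their uncertainty (Equations 46 and 48) follow by applying the same routine a second time to the delta stream.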
[0112] The uncertainties of the static features, the delta features, and the acceleration features are each used in Equation 6 to determine the probability of corresponding features of the noisy feature vector during decoding. In particular, each component of the delta feature and acceleration feature calculated in Equations 45 and 46 is treated as a separate component of the noise-reduced vector, and the corresponding uncertainty associated with each component of the delta vector and each acceleration vector is used as an uncertainty $\sigma_{\hat{x},l}^2$ in Equation 6.
[0113] A method and system for using the present invention in
speech recognition is shown in the flow diagram of FIG. 3 and the
block diagram of FIG. 4. The method begins in step 300 of FIG. 3,
where a noisy speech signal is converted into a sequence of feature
vectors. To do this, a microphone 404 of FIG. 4 converts audio waves from a speaker 400 and one or more additive noise sources 402 into electrical signals. The electrical signals are then sampled by an analog-to-digital converter 406 to generate a sequence of digital values, which are grouped into frames of values by a frame constructor 408. In one embodiment, A-to-D converter 406 samples the analog signal at 16 kHz and 16 bits per sample, thereby creating 32 kilobytes of speech data per second, and frame constructor 408 creates a new frame every 10 milliseconds that includes 25 milliseconds worth of data.
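The framing step just described (16 kHz samples, 25 ms frames generated every 10 ms) can be sketched as follows; the handling of the final partial frame and any windowing are illustrative assumptions.

    import numpy as np

    def frame_signal(samples, rate=16000, frame_ms=25, hop_ms=10):
        frame_len = rate * frame_ms // 1000      # 400 samples per frame
        hop = rate * hop_ms // 1000              # a new frame every 160 samples
        n_frames = 1 + max(0, (len(samples) - frame_len) // hop)
        return np.stack([samples[i * hop:i * hop + frame_len] for i in range(n_frames)])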
[0114] Each frame of data provided by frame constructor 408 is
converted into a feature vector by a feature extractor 410. Methods
for identifying such feature vectors are well known in the art and
include 13-dimensional Mel-Frequency Cepstrum Coefficients (MFCC)
extraction.
[0115] In step 302 of FIG. 3, the feature vectors for the noisy
speech signal are provided to a noise estimation module 411 in FIG.
4. Noise estimation module 411 estimates the noise in the current
frame and provides a feature vector representing the noise estimate
together with the noisy speech signal to a noise reduction module
412.
[0116] In step 304 of FIG. 3, noise reduction module 412 uses
Equations 33-35, 45 and 46 above and the model parameters of those
equations, which are stored in noise reduction parameter storage
413, to produce a sequence of noise-reduced feature vectors from
the sequence of noisy feature vectors. In particular, noise
reduction module 412 uses an acoustic distortion model as described
above. In addition, noise reduction module 412 uses equations
36-44, 47 and 48 to determine an uncertainty associated with
forming the noise reduced feature vectors.
[0117] The output of noise reduction module 412 is a series of
noise-reduced feature vectors and a corresponding series of
uncertainties associated with those noise-reduced feature vectors.
If the input signal is a training signal, the series of
noise-reduced feature vectors is provided to a trainer 424, which
uses the noise-reduced feature vectors and a training text 426 to
train an acoustic model 418. Techniques for training such models
are known in the art and a description of them is not required for
an understanding of the present invention.
[0118] If the input signal is a test signal, the noise-reduced
feature vectors and the corresponding uncertainties are provided to
a decoder 414, which identifies a most likely sequence of words
based on the stream of feature vectors, the stream of
uncertainties, a lexicon 415, a language model 416, and the
acoustic model 418 as shown by step 306 of FIG. 3. Specifically,
the decoding utilizes equation 6 above.
[0119] The most probable sequence of hypothesis words is provided
to a confidence measure module 420. Confidence measure module 420
identifies which words are most likely to have been improperly
identified by the speech recognizer, based in part on a secondary
acoustic model (not shown). Confidence measure module 420 then
provides the sequence of hypothesis words to an output module 422
along with identifiers indicating which words may have been
improperly identified. Those skilled in the art will recognize that
confidence measure module 420 is not necessary for the practice of
the present invention.
[0120] Although FIG. 4 depicts a speech recognition system, the
present invention may be used in any pattern recognition system and
is not limited to speech.
[0121] Although the present invention has been described with
reference to particular embodiments, workers skilled in the art
will recognize that changes may be made in form and detail without
departing from the spirit and scope of the invention.
* * * * *