Voicing index controls for CELP speech coding

Gao, Yang

Patent Application Summary

U.S. patent application number 10/799503 was filed with the patent office on 2004-09-16 for voicing index controls for CELP speech coding. This patent application is currently assigned to Mindspeed Technologies, Inc. Invention is credited to Gao, Yang.

Publication Number: 20040181411
Application Number: 10/799503
Family ID: 33029999
Filed Date: 2004-09-16

United States Patent Application 20040181411
Kind Code A1
Gao, Yang September 16, 2004

Voicing index controls for CELP speech coding

Abstract

An approach for improving the quality of speech synthesized using analysis-by-synthesis (ABS) coders is presented. An unstable perceptual quality in analysis-by-synthesis type speech coding (e.g. CELP) may occur because the periodicity degree in a voiced speech signal may vary significantly across different segments of the voiced speech. Thus, the present invention uses a voicing index, which may indicate the periodicity degree of the speech signal, to control and improve ABS type speech coding. The voicing index may be used to improve quality stability by controlling the encoder and/or decoder in: fixed-codebook short-term enhancement including the spectrum tilt; perceptual weighting filter; sub-fixed codebook determination; LPC interpolation; fixed-codebook pitch enhancement; post-pitch enhancement; noise injection into the high-frequency band at the decoder; LTP Sinc window; signal decomposition, etc.


Inventors: Gao, Yang; (Mission Viejo, CA)
Correspondence Address:
    FARJAMI & FARJAMI LLP
    Suite 360
    26522 La Alameda Avenue
    Mission Viejo
    CA
    92691
    US
Assignee: Mindspeed Technologies, Inc.

Family ID: 33029999
Appl. No.: 10/799503
Filed: March 11, 2004

Related U.S. Patent Documents

Application Number: 60/455,435 (provisional)
Filing Date: Mar 15, 2003

Current U.S. Class: 704/262 ; 704/E19.028; 704/E19.035; 704/E19.046; 704/E21.011
Current CPC Class: G10L 19/12 20130101; G10L 19/09 20130101; G10L 19/087 20130101; G10L 19/005 20130101; G10L 21/0232 20130101; G10L 21/038 20130101; G10L 21/0208 20130101; G10L 19/20 20130101; G10L 19/265 20130101; G10L 25/90 20130101
Class at Publication: 704/262
International Class: G10L 019/04

Claims



What is claimed is:

1. A method of improving synthesized speech quality comprising: obtaining an input speech signal; coding said input speech using a Code Excited Linear Prediction coder to generate code parameters for synthesis of said input speech; and using a voicing index representing a characteristic of said input speech in enhancing said synthesis of said input speech.

2. The method of claim 1, wherein said characteristic of said input speech is periodicity of said input speech.

3. The method of claim 1, wherein said enhancing said synthesis of said input speech is by controlling an adaptive highpass filter with said voicing index to enhance high frequency region during said coding.

4. The method of claim 1, wherein said enhancing said synthesis of said input speech is by controlling an adaptive perceptual weighting filter in said Code Excited Linear Prediction coder with said voicing index.

5. The method of claim 1, wherein said enhancing said synthesis of said input speech is by controlling an adaptive Sinc window used in said Code Excited Linear Prediction coder for pitch contribution with said voicing index.

6. The method of claim 1, wherein said enhancing said synthesis of said input speech is by controlling spectrum tilt of said input speech by short-term enhancement of a fixed-codebook of said Code Excited Linear Prediction coder with said voicing index.

7. The method of claim 1, wherein said enhancing said synthesis of said input speech is by controlling a perceptual weighting filter of said Code Excited Linear Prediction coder with said voicing index.

8. The method of claim 1, wherein said enhancing said synthesis of said input speech is by controlling a linear prediction coder of said Code Excited Linear Prediction coder with said voicing index.

9. The method of claim 1, wherein said enhancing said synthesis of said input speech is by controlling a pitch enhancement fixed-codebook of said Code Excited Linear Prediction coder with said voicing index.

10. The method of claim 1, wherein said enhancing said synthesis of said input speech is by controlling post pitch enhancement of said Code Excited Linear Prediction coder with said voicing index.

11. The method of claim 1, wherein said voicing index selects at least one sub-codebook from a plurality of sub-codebooks of said Code Excited Linear Prediction coder based on said characteristic of said input speech signal.

12. A method of improving synthesized speech quality comprising: obtaining code parameters of an input speech signal; obtaining a voicing index for use in enhancing synthesis of said input speech signal from said code parameters; and processing said code parameters through a Code Excited Linear Prediction coder using information provided by said voicing index to generate a synthesized version of said input speech signal.

13. The method of claim 12, wherein said voicing index provides periodicity of said input speech signal.

14. The method of claim 12, wherein said voicing index provides characteristics of an adaptive highpass filter used to enhance high frequency region of said excitation during generation of said code parameters for said input speech.

15. The method of claim 12, wherein said voicing index provides characteristics of an adaptive perceptual weighting filter used to enhance perceptual quality of said input speech during generation of said code parameters for said input speech.

16. The method of claim 12, wherein said voicing index provides characteristics of an adaptive Sinc window for pitch contribution used to enhance perceptual quality of said input speech during generation of said code parameters for said input speech.

17. The method of claim 12, wherein said enhancing synthesis of said input speech is by controlling spectrum tilt of said input speech by short-term enhancement of a fixed-codebook of said Code Excited Linear Prediction coder with said voicing index.

18. The method of claim 12, wherein said enhancing of said synthesis of said input speech is by controlling a linear prediction coder filter of said Code Excited Linear Prediction coder with said voicing index.

19. The method of claim 12, wherein said enhancing of said synthesis of said input speech is by controlling a pitch enhancement fixed-codebook of said Code Excited Linear Prediction coder with said voicing index.

20. The method of claim 12, wherein said enhancing said synthesis of said input speech is by controlling post pitch enhancement of said Code Excited Linear Prediction coder with said voicing index.

21. The method of claim 12, wherein said voicing index selects at least one sub-codebook from a plurality of sub-codebooks of said Code Excited Linear Prediction coder based on said characteristic of said input speech signal.

22. An apparatus for improving synthesized speech quality comprising: an input speech signal; a Code Excited Linear Prediction coder for coding said input speech signal to generate code parameters for synthesis of said input speech; and a voicing index having a characteristic of said input speech for use in enhancing said synthesis of said input speech.

23. The apparatus of claim 22, wherein said characteristic of said input speech is periodicity of said input speech.

24. The apparatus of claim 22, wherein said characteristic of said input speech is a characteristic of an adaptive highpass filter used to enhance high frequency region of said excitation during said coding.

25. The apparatus of claim 22, wherein said characteristic of said input speech is a characteristic of an adaptive perceptual weighting filter used in said Code Excited Linear Prediction coder.

26. The apparatus of claim 22, wherein said characteristic of said input speech is a characteristic of an adaptive Sinc window used in said Code Excited Linear Prediction coder.

27. The apparatus of claim 22, wherein said voicing index selects at least one sub-codebook from a plurality of sub-codebooks of said Code Excited Linear Prediction coder based on said characteristic of said input speech signal.

28. An apparatus for improving synthesized speech quality comprising: a set of code parameters of an input speech signal; a voicing index for use in enhancing synthesis of said input speech signal from said code parameters; and a Code Excited Linear Prediction coder using said code parameters and information provided by said voicing index to generate a synthesized version of said input speech signal.

29. The apparatus of claim 28, wherein said voicing index provides periodicity of said input speech signal.

30. The apparatus of claim 28, wherein said voicing index provides characteristics of a highpass filter used to enhance high frequency region of said excitation during generation of said code parameters for said input speech.

31. The apparatus of claim 28, wherein said voicing index provides characteristics of an adaptive perceptual weighting filter used to enhance perceptual quality of said input speech during generation of said code parameters for said input speech.

32. The apparatus of claim 28, wherein said voicing index provides characteristics of an adaptive Sinc window used to enhance perceptual quality of said input speech during generation of said code parameters for said input speech.

33. The apparatus of claim 28, wherein said voicing index selects at least one sub-codebook from a plurality of sub-codebooks of said Code Excited Linear Prediction coder based on characteristics of said input speech signal.

34. A method of improving synthesized speech quality comprising: generating a plurality of frames from an input speech signal; coding each frame of said plurality of frames using a Code Excited Linear Prediction coder to generate code parameters for synthesis of said each frame of said input speech; and transmitting a voicing index having a plurality of bits indicative of a classification of said each frame of said input speech.

35. The method of claim 34, wherein said plurality of bits are three bits.

36. The method of claim 34, wherein said classification is indicative of periodicity of said input speech signal.

37. The method of claim 34, wherein said classification is indicative of an irregular voiced speech signal.

38. The method of claim 34, wherein said classification is indicative of a periodic index.

39. The method of claim 38, wherein said periodic index ranges from low periodic index to high periodic index.

40. A method of improving synthesized speech quality comprising: receiving a frame of an input speech signal, said frame having a plurality of code parameters and a voicing index, wherein said voicing index comprises a plurality of bits; determining a classification for said frame of said input speech signal from said plurality of bits of said voicing index; and decoding said frame using a Code Excited Linear Prediction coder based on said classification to synthesize said input speech.

41. The method of claim 40, wherein said plurality of bits are three bits.

42. The method of claim 40, wherein said classification is indicative of a noisy speech signal.

43. The method of claim 40, wherein said classification is indicative of an irregular voiced speech signal.

44. The method of claim 40, wherein said classification is indicative of a periodic index.

45. The method of claim 44, wherein said periodic index ranges from low periodic index to high periodic index.
Description



RELATED APPLICATIONS

[0001] The present application claims the benefit of U.S. provisional application serial No. 60/455,435, filed Mar. 15, 2003, which is hereby fully incorporated by reference in the present application.

[0002] The following co-pending and commonly assigned U.S. patent applications have been filed on the same day as this application, and are incorporated by reference in their entirety:

[0003] U.S. patent application Ser. No. ______, "SIGNAL DECOMPOSITION OF VOICED SPEECH FOR CELP SPEECH CODING," Attorney Docket Number: 0160112.

[0004] U.S. patent application Ser. No. ______, "SIMPLE NOISE SUPPRESSION MODEL," Attorney Docket Number: 0160114.

[0005] U.S. patent application Ser. No. ______, "ADAPTIVE CORRELATION WINDOW FOR OPEN-LOOP PITCH," Attorney Docket Number: 0160115.

[0006] U.S. patent application Ser. No. ______, "RECOVERING AN ERASED VOICE FRAME WITH TIME WARPING," Attorney Docket Number: 0160116.

BACKGROUND OF THE INVENTION

[0007] 1. Field of the Invention

[0008] The present invention relates generally to speech coding and, more particularly, to Code Excited Linear Prediction (CELP) speech coding.

[0009] 2. Related Art

[0010] Generally, a speech signal can be band-limited to about 10 kHz without affecting its perception. In telecommunications, however, the speech signal bandwidth is usually limited much more severely. The telephone network limits the bandwidth of the speech signal to between 300 Hz and 3400 Hz, a range known as the "narrowband". Such band-limitation results in the characteristic sound of telephone speech. Both the lower limit at 300 Hz and the upper limit at 3400 Hz affect the speech quality.

[0011] In most digital speech coders, the speech signal is sampled at 8 kHz, resulting in a maximum signal bandwidth of 4 kHz. In practice, however, the signal is usually band-limited to about 3600 Hz at the high end. At the low end, the cut-off frequency is usually between 50 Hz and 200 Hz. The narrowband speech signal, which requires a sampling frequency of 8 kHz, provides a speech quality referred to as toll quality. Although toll quality is sufficient for telephone communications, emerging applications such as teleconferencing, multimedia services and high-definition television require improved quality.

[0012] The communications quality can be improved for such applications by increasing the bandwidth. For example, by increasing the sampling frequency to 16 kHz, a wider bandwidth, ranging from 50 Hz to about 7000 Hz, can be accommodated; this range is referred to as the "wideband". Extending the lower frequency range to 50 Hz increases naturalness, presence and comfort. At the other end of the spectrum, extending the higher frequency range to 7000 Hz increases intelligibility and makes it easier to differentiate between fricative sounds.

[0013] Digitally, speech is synthesized by a well-known approach called Analysis-By-Synthesis (ABS), also referred to as the closed-loop or waveform-matching approach. It offers relatively better speech coding quality than other approaches at medium to high bit rates. A well-known ABS approach is Code Excited Linear Prediction (CELP). In CELP coding, speech is synthesized by using encoded excitation information to excite a linear predictive coding (LPC) filter. The output of the LPC filter is compared against the original speech, and the filter parameters are adjusted in a closed-loop sense until the parameters yielding the least error are found. One factor affecting CELP coding is that the voicing degree can vary significantly across different voiced speech segments, causing an unstable perceptual quality in the speech coding.

[0014] The present invention addresses the above analysis-by-synthesis voiced speech issue.

SUMMARY OF THE INVENTION

[0015] In accordance with the purpose of the present invention as broadly described herein, there are provided systems and methods for improving the quality of synthesized speech by using a voicing index to control the speech coding process.

[0016] According to one embodiment of the present invention, a voicing index, which indicates the periodicity degree of the speech signal, is used to control and improve ABS type speech coding. The periodicity degree can vary significantly across different voiced speech segments, and this variation causes an unstable perceptual quality in analysis-by-synthesis type speech coding, such as CELP.

[0017] The voicing index can be used to improve the quality stability by controlling encoder and/or decoder, for example, in the following areas: (a) fixed-codebook short-term enhancement including the spectrum tilt, (b) perceptual weighting filter, (c) sub-fixed codebook determination, (d) LPC interpolation, (e) fixed-codebook pitch enhancement, (f) post-pitch enhancement, (g) noise injection into the high-frequency band at decoder, (h) LTP Sinc window, (i) signal decomposition, etc. In one embodiment for CELP speech coding, the voicing index may be based on a normalized pitch correlation.

[0018] These and other aspects of the present invention will become apparent with further reference to the drawings and specification, which follow. It is intended that all such additional systems, methods, features and advantages be included within this description, be within the scope of the present invention, and be protected by the accompanying claims.

BRIEF DESCRIPTION OF DRAWINGS

[0019] FIG. 1 is an illustration of the frequency domain characteristics of a sample speech signal.

[0020] FIG. 2 is an illustration of a voicing index classification available to both the encoder and the decoder.

[0021] FIG. 3 is an illustration of a basic CELP coding block diagram.

[0022] FIG. 4 is an illustration of a CELP coding process with an additional adaptive weighting filter for speech enhancement in accordance with an embodiment of the present invention.

[0023] FIG. 5 is an illustration of a decoder implementation with post filter configuration in accordance with an embodiment of the present invention.

[0024] FIG. 6 is an illustration of a CELP coding block diagram with several sub-codebooks.

[0025] FIG. 7A is an illustration of sampling for creation of a Sinc window.

[0026] FIG. 7B is an illustration of a Sinc window.

DETAILED DESCRIPTION

[0027] The present application may be described herein in terms of functional block components and various processing steps. It should be appreciated that such functional blocks may be realized by any number of hardware components and/or software components configured to perform the specified functions. For example, the present application may employ various integrated circuit components, e.g., memory elements, digital signal processing elements, transmitters, receivers, tone detectors, tone generators, logic elements, and the like, which may carry out a variety of functions under the control of one or more microprocessors or other control devices. Further, it should be noted that the present application may employ any number of conventional techniques for data transmission, signaling, signal processing and conditioning, tone generation and detection and the like. Such general techniques that may be known to those skilled in the art are not described in detail herein.

[0028] A voicing index is traditionally one of the important indices sent to the decoder in harmonic speech coding. The voicing index generally represents the degree of periodicity and/or the periodic harmonic band boundary of voiced speech. A voicing index is traditionally not used in CELP coding systems. Embodiments of the present invention, however, use the voicing index to control and improve the quality of synthesized speech in a CELP or other analysis-by-synthesis type coder.

[0029] FIG. 1 is an illustration of the frequency domain characteristics of a sample speech signal. In this illustration, the spectrum in the wideband extends from slightly above 0 Hz to around 7 kHz. Although the highest possible frequency in the spectrum of a speech signal sampled at 16 kHz is 8 kHz (i.e. the Nyquist folding frequency), this illustration shows that the energy is almost zero between 7.0 kHz and 8 kHz. It should be apparent to those of skill in the art that the signal ranges used herein are for illustration purposes only and that the principles expressed herein are applicable to other signal bands.

[0030] As illustrated in FIG. 1, the speech signal is quite harmonic at lower frequencies, but at higher frequencies it does not remain as harmonic, because the probability of a noisy speech signal increases with frequency. For instance, in this illustration the speech signal exhibits traits of becoming noisy at the higher frequencies, e.g., above 5.0 kHz. This noisy signal makes waveform matching at higher frequencies very difficult; thus, techniques like ABS coding (e.g. CELP) become unreliable if high quality speech is desired. For example, in a CELP coder, the synthesizer is designed to match the original speech signal by minimizing the error between the original speech and the synthesized speech. A noisy signal is unpredictable, making error minimization very difficult.

[0031] Given the above problem, embodiments of the present invention use a voicing index which is sent to the decoder, from the encoder, to improve the quality of speech synthesized by an ABS type speech coder, e.g., CELP coder.

[0032] The voicing index, which is transmitted by the encoder to the decoder, may represent the periodicity of the voiced speech or the harmonic structure of the signal. In another example embodiment, the voicing index may be represented by three bits thus providing up to eight classes of speech signal. For instance, FIG. 2 is an illustration of a voicing index classification available to both the encoder and the decoder. In this illustration, index 0 (i.e. "000") may indicate background noise, index 1 (i.e. "001") may indicate noise-like or unvoiced speech signal, index 2 (i.e. "010") may indicate irregular voiced signal such as voiced signal during onset, and indices 3-7 (i.e. "011" to "111") could each indicate the periodicity of the speech signals. For instance, index 3 ("011") may represent the least periodic signal and index 7 ("111") may indicate the most periodic signal.
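The eight-class layout described above can be sketched as a small lookup. The class boundaries follow this example embodiment, but the wording of each label is illustrative:

```python
def describe_voicing_index(index: int) -> str:
    """Map a 3-bit voicing index to the signal class it indicates,
    following the example classification above (labels illustrative)."""
    if not 0 <= index <= 7:
        raise ValueError("a 3-bit voicing index must lie in 0..7")
    if index == 0:
        return "background noise"
    if index == 1:
        return "noise-like / unvoiced speech"
    if index == 2:
        return "irregular voiced (e.g. onset)"
    # Indices 3..7 grade periodicity from least to most periodic.
    return f"voiced, periodicity grade {index - 2} of 5"
```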

[0033] The voicing index information can be transmitted by the encoder as part of each encoded frame. In other words, each frame may include the voicing index bits (e.g. three bits), which indicate the periodicity degree of that particular frame. In one embodiment, the voicing index for CELP may be based on a normalized pitch correlation parameter, Rp, and may be derived from the following equation: 10 log((1-Rp)^2), where -1.0 < Rp < 1.0.
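A minimal sketch of computing Rp and the quantity above. The helper names are hypothetical, and the logarithm base (base 10, as in dB-style measures) is an assumption not fixed by the text:

```python
import math

def normalized_pitch_correlation(x, lag):
    """Normalized correlation Rp between a frame and its pitch-lagged
    past; Rp near 1.0 indicates strongly periodic speech."""
    num = sum(x[n] * x[n - lag] for n in range(lag, len(x)))
    den = math.sqrt(sum(x[n] ** 2 for n in range(lag, len(x))) *
                    sum(x[n - lag] ** 2 for n in range(lag, len(x))))
    return num / den if den > 0.0 else 0.0

def voicing_measure(rp):
    """10*log((1 - Rp)^2): more negative for more periodic frames;
    the voicing index may be derived by quantizing this value."""
    return 10.0 * math.log10((1.0 - rp) ** 2)
```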

[0034] In one example, the voicing index may be used for fixed codebook short-term enhancement, including the spectrum tilt. FIG. 3 is an illustration of a basic CELP coding block diagram. As illustrated, the CELP coding block 300 comprises the Fixed Codebook 301, gain block 302, Pitch filter block 303, and LPC filter 304. CELP coding block 300 further comprises comparison block 306, Weighting Filter block 320, and Mean Squared Error (MSE) computation block 308.

[0035] The basic idea behind CELP coding is that Input Speech 307 is compared against the synthesized output 305 to generate error 309, the mean squared error. The computation continues in a closed-loop sense, with new coding parameters selected, until error 309 is minimized.
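The closed-loop search can be sketched as follows. The toy `synthesize` callback stands in for the pitch filter / LPC filter chain of FIG. 3 and is an assumption of this sketch, not part of the patent:

```python
def abs_search(target, codebook, synthesize):
    """Analysis-by-synthesis search: try each fixed-codebook entry,
    synthesize speech from it, and keep the entry whose synthesized
    output has the least mean squared error against the target."""
    best_index, best_mse = -1, float("inf")
    for i, code in enumerate(codebook):
        synth = synthesize(code)
        mse = sum((t - s) ** 2 for t, s in zip(target, synth)) / len(target)
        if mse < best_mse:
            best_index, best_mse = i, mse
    return best_index, best_mse
```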

[0036] On the receiving side, the decoder synthesizes the speech using similar blocks 301-304 (see FIG. 5). Thus, the encoder passes information to the decoder as needed to select the proper codebook entry, gain, filters, and so on.

[0037] In a CELP speech coding system, when the speech signal is more periodic, the pitch filter (e.g. 303) contribution is heavier than the fixed codebook (e.g. 301) contribution. As a result, an embodiment of the present invention may use the voicing index to place more focus on the high frequency region by implementing an adaptive high pass filter controlled by the value of the voicing index. An architecture such as the one shown in FIG. 4 may be implemented. For instance, Adaptive Filter 310 could be an adaptive filter emphasizing the power in the high frequency region. In the illustration, the weighting filter 420 may also be an adaptive filter for improving the CELP coding process.

[0038] On the decoder side, the voicing index may be used to select the appropriate Post Filter 520 parameters. FIG. 5 is an illustration of the decoder implementation with post filter configuration. In one or more embodiments, Post Filter 520 may have several configurations saved in a table, which may be selectable using information in the voicing index.

[0039] In another example, the voicing index may be used in conjunction with the perceptual weighting filter of CELP. The perceptual weighting filter may be represented by Adaptive Filter 420 of FIG. 4, for example. As is well known, waveform matching via mean squared error minimization concentrates on the most important portion (i.e. the high energy portion) of the speech signal and ignores the low energy areas. Embodiments of the present invention use an adaptive weighting process to enhance the low energy areas. For instance, the voicing index may be used to set the aggressiveness of the weighting filter 420 depending on the periodicity degree of the frame.
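One way to picture this is with the common CELP weighting filter form W(z) = A(z/g1)/A(z/g2), whose bandwidth-expansion factors set the filter's aggressiveness. The mapping from voicing index to g1 below is purely an illustrative assumption; the patent does not specify one:

```python
def weighting_filter_coeffs(lpc, voicing_index):
    """Numerator/denominator coefficients for a perceptual weighting
    filter of the common CELP form W(z) = A(z/g1) / A(z/g2).
    The voicing-index-to-g1 mapping is illustrative: more periodic
    frames (higher index) get a slightly less aggressive filter."""
    g1 = 0.94 - 0.02 * max(0, voicing_index - 3)  # assumed mapping
    g2 = 0.60
    num = [a * g1 ** i for i, a in enumerate(lpc)]  # A(z/g1)
    den = [a * g2 ** i for i, a in enumerate(lpc)]  # A(z/g2)
    return num, den
```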

[0040] In yet another embodiment, as illustrated in FIG. 6, the voicing index may be used to select the sub-fixed codebook. The fixed codebook may comprise several sub-codebooks, for example, one sub-codebook 601 with fewer pulses but higher position resolution, one sub-codebook 602 with more pulses but lower position resolution, and a noise sub-codebook 603. Therefore, if the voicing index indicates a noisy signal, then sub-codebook 602 or noise sub-codebook 603 can be used; if the voicing index does not indicate a noisy signal, then one of the sub-codebooks (e.g. 601 or 602) may be used depending on the degree of periodicity of the given frame. Note that the gain block (codebook) 302 may also be applied individually to each sub-codebook in one or more embodiments.
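The selection logic above might be sketched as follows. The index thresholds are illustrative assumptions, since the patent does not fix specific index ranges for each sub-codebook:

```python
def select_sub_codebook(voicing_index):
    """Choose a fixed sub-codebook (cf. blocks 601-603 of FIG. 6)
    from the 3-bit voicing index; thresholds are illustrative."""
    if voicing_index <= 1:
        # Background noise or noise-like/unvoiced frame.
        return "603: noise sub-codebook"
    if voicing_index <= 4:
        # Weakly periodic or irregular voiced frame.
        return "602: more pulses, lower position resolution"
    # Strongly periodic voiced frame.
    return "601: fewer pulses, higher position resolution"
```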

[0041] Further, the voicing index may be used in conjunction with LPC interpolation. For example, during linear interpolation, the previous LPC set is as important as the current LPC set if the interpolated LPC lies at the midpoint between the previous one and the current one. Thus, if the voicing index indicates, for example, that the previous frame was unvoiced and the present frame is voiced, the LPC interpolation algorithm may favor the current frame more than the previous frame.
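As a sketch, the interpolation weight could be biased toward the current frame on an unvoiced-to-voiced transition. Note that real coders typically interpolate in the LSF domain rather than directly on LPC coefficients, and the weight values used here are assumptions:

```python
def interpolate_lpc(prev_lpc, curr_lpc, curr_weight):
    """Linear interpolation between previous and current LPC sets.
    curr_weight = 0.5 treats both frames equally; a voicing-index
    driven policy may raise it after an unvoiced-to-voiced onset."""
    w = max(0.0, min(1.0, curr_weight))
    return [(1.0 - w) * p + w * c for p, c in zip(prev_lpc, curr_lpc)]
```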

[0042] The voicing index may also be used for fixed codebook pitch enhancement. Typically, the previous pitch gain is used to perform pitch enhancement. However, the voicing index provides information relating to the current frame and thus could be a better indicator than the previous pitch gain. The magnitude of the pitch enhancement may be determined based on the voicing index: the more periodic the frame (based on the voicing index value), the higher the magnitude of the enhancement. For example, the voicing index may be used in conjunction with U.S. patent application Ser. No. 09/365,444, filed Aug. 2, 1999, the specification of which is incorporated herein by reference, to determine the magnitude of the enhancements in the bi-directional pitch enhancement system defined therein.
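The "more periodic, stronger enhancement" rule can be sketched as a monotone mapping. The linear shape and the `max_factor` cap are illustrative assumptions, not values from the patent:

```python
def pitch_enhancement_factor(voicing_index, max_factor=0.5):
    """Scale the fixed-codebook pitch enhancement with the frame's
    periodicity: the more periodic the frame (higher 3-bit voicing
    index), the stronger the enhancement.  Mapping is illustrative."""
    if voicing_index < 3:
        return 0.0                 # noise-like or irregular frame
    return max_factor * (voicing_index - 2) / 5.0
```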

[0043] As a further example, the voicing index may be used in place of the pitch gain for post pitch enhancement. This is advantageous since, as discussed above, the voicing index may be derived from a normalized pitch correlation value, i.e. Rp, which is typically between 0.0 and 1.0; the pitch gain, however, may exceed 1.0 and can adversely affect the post pitch enhancement process.

[0044] As another example, the voicing index may also be used to determine the amount of noise that should be injected into the high frequency band at the decoder side. This embodiment may be used when the input speech is decomposed into a voiced portion and a noise portion as discussed in pending U.S. patent application Ser. No. ______, filed concurrently herewith, entitled "SIGNAL DECOMPOSITION OF VOICED SPEECH FOR CELP SPEECH CODING", the specification of which is incorporated herein by reference.

[0045] The voicing index may also be used to control modification of the Sinc window. The Sinc window is used to generate the adaptive codebook contribution vector, i.e. the LTP excitation vector, with fractional pitch lag for CELP coding. In wideband speech coding, it is known that strong harmonics appear in the low frequency area of the band and noisy signals appear in the high frequency area.

[0046] Long-term prediction, or LTP, produces the harmonics by taking a previous excitation and copying it to a current subframe according to the pitch period. It should be noted that if a pure copy of the previous excitation is made, the harmonics are replicated all the way to the end of the spectrum in the frequency domain. However, that would not be an accurate representation of a true voice signal, especially in wideband speech coding.

[0047] In one embodiment, for a wideband speech signal, when the previous signal is used to represent the current signal, an adaptive low pass filter is applied to the Sinc interpolation window, since there is a high probability of noise in the high frequency area.

[0048] In CELP coding, the fixed codebook contributes to coding of the noisy or irregular portion of the speech signal, and the pitch adaptive codebook contributes to the voiced or regular portion. The adaptive codebook contribution is generated using a Sinc window because the pitch lag can be fractional. If the pitch lag were an integer, one excitation signal could simply be copied to the next; because the pitch lag is fractional, straight copying of the previous excitation signal would not work. (After the Sinc window is modified, straight copying would not work even for an integer pitch lag.) To generate the pitch contribution, several samples are taken, as shown in FIG. 7A, which are weighted and then added together; the weights for the samples are called the Sinc window, which originally has a symmetric shape, as shown in FIG. 7B. The shape in practice depends on the fractional portion of the pitch lag and on the adaptive lowpass filter applied to the Sinc window. Application of the Sinc window is similar to convolution or filtering, except that the Sinc window is a non-causal filter. In the representation shown below, a window signal w(n) is convolved with the signal s(n) in the time domain, which is equivalent to multiplying the spectrum of the window W(w) by the spectrum of the signal S(w) in the frequency domain:

U_ACB(n) = w(n) * s(n)  <-->  W(w)S(w).

[0049] According to the above representation, lowpass filtering the Sinc window is equivalent to lowpass filtering the final adaptive codebook contribution U_ACB(n), or excitation signal. However, lowpass filtering the Sinc window is advantageous because the Sinc window is shorter than the excitation, making it easier to modify; furthermore, the filtering of the Sinc window can be pre-calculated and stored.
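The fractional-lag interpolation described above can be sketched as follows. The window half-length of four taps on each side is an illustrative choice, not a value from the patent:

```python
import math

def sinc_window(frac, half_len=4):
    """Symmetric sinc interpolation weights for a fractional pitch
    lag; frac in [0, 1) is the fractional part of the lag."""
    taps = []
    for k in range(-half_len, half_len + 1):
        x = k - frac
        taps.append(1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x))
    return taps

def ltp_sample(past_exc, pos, int_lag, frac, half_len=4):
    """One adaptive-codebook (LTP) sample: a weighted sum of past
    excitation samples around pos - lag, weighted by the Sinc window."""
    w = sinc_window(frac, half_len)
    return sum(w[k + half_len] * past_exc[pos - int_lag + k]
               for k in range(-half_len, half_len + 1))
```

With a zero fractional part the window reduces to a single unit tap, so the sample is (numerically) a straight copy from one pitch period back.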

[0050] In one embodiment of the present invention, the voicing index may be used to control modification of the low pass filter for the Sinc window. For instance, the voicing index may indicate whether the harmonic structure is strong or weak. If the harmonic structure is strong, a weak low pass filter is applied to the Sinc window; if the harmonic structure is weak, a strong low pass filter is applied.
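A sketch of this voicing-controlled lowpass modification: a simple 3-tap smoother whose strength falls as the harmonic structure gets stronger. Both the filter shape and the index-to-strength mapping are illustrative assumptions:

```python
def lowpass_strength(voicing_index):
    """Strong harmonic structure (high index) -> weak lowpass (0.0);
    weak structure -> strong lowpass (0.5).  Mapping assumed."""
    return 0.5 * (7 - max(3, min(7, voicing_index))) / 4.0

def lowpass_sinc_window(taps, strength):
    """Smooth the Sinc window with a 3-tap filter; this is equivalent
    to lowpass filtering the adaptive codebook contribution, but is
    cheaper because the window is short and can be precomputed."""
    out = []
    for i, t in enumerate(taps):
        left = taps[i - 1] if i > 0 else 0.0
        right = taps[i + 1] if i < len(taps) - 1 else 0.0
        out.append((1.0 - 2.0 * strength) * t + strength * (left + right))
    return out
```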

[0051] Although the above embodiments of the present application are described with reference to wideband speech signals, the present invention is equally applicable to narrowband speech signals.

[0052] The methods and systems presented above may reside in software, hardware, or firmware on the device, which can be implemented on a microprocessor, digital signal processor, application specific IC, or field programmable gate array ("FPGA"), or any combination thereof, without departing from the spirit of the invention. Furthermore, the present invention may be embodied in other specific forms without departing from its spirit or essential characteristics. The described embodiments are to be considered in all respects only as illustrative and not restrictive.

* * * * *

