EP0212323A2 - Method and apparatus for signal transformation and its application to signal processing - Google Patents
Method and apparatus for signal transformation and its application to signal processing
- Publication number
- EP0212323A2 (application EP86110212A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- signal
- transformation
- generating
- histogram
- reference position
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
Definitions
- the present invention relates to signal processing techniques and particularly to a method and apparatus for generating a signal transformation which retains a substantial part of the informational content of the original signal.
- Audition is a temporally-based sense, whereas vision is primarily spatially-based.
- Temporal events as brief as a few thousandths of a second are critical for making simple phonetic or word-based distinctions, such as between "pole" and "bowl," or "tow down" and "towed down."
- In addition to its highly developed temporal-resolving power, the ear also exhibits excellent spectral resolution and dynamic range. Exactly how the ear achieves such fine spectral resolution without sacrificing temporal resolution remains a mystery. If more were understood about how the ear works, such knowledge could be applied to speech technologies to improve the performance of speech recognizers and coding devices.
- Satisfactory temporal information from an acoustic speech signal is important for performing certain types of speech processing, e.g., speech segmentation in phonetically-based recognition systems. Likewise, satisfactory spectral resolution of the speech signal is important for other types of speech processing such as speech compression and synthesis.
- Current state-of-the-art digital signal processors cannot support such diverse speech processing applications because all suffer the classical trade-off of frequency versus time resolution -- processors exhibiting good frequency resolution have poor temporal resolution, and vice versa.
- a digital signal processor having good spectral and temporal resolution would be a tremendous benefit to the speech industry because it would allow a single processing system to approximate the performance characteristics of the ear itself.
- An ideal digital signal processor for use in speech processing would provide a unique representation or "transformation" of the speech signal from which all relevant speech features could be derived. As is well known in the art, these features include voice pitch, amplitude envelope, spectrum and degree of voicing. It is presently common in speech systems to use totally different representations of the speech signal to abstract these features, depending on the type of speech processing application being implemented and the capabilities of the processor carrying out the implementation.
- a method and apparatus for generating a signal transformation which retains a substantial part of the informational content of the original signal required for speech processing applications.
- such applications include speech compression, speech synthesis and speech segmentation.
- the transformation is generated by converting all or part of the original signal into a sequence of data samples, selecting a reference position along a first sub-part of the sequence, and generating a histogram for the reference position according to a correlation function. Thereafter, a reference position along a second sub-part of the sequence is selected, and an additional histogram is generated for this reference position.
- the plurality of histograms generated in this fashion comprise the transformation.
- the transformation is then used as the signal itself in signal processing applications.
- the transformation comprises a plurality of "weighted" histograms, each having a predetermined number of positions "d_max" and being derived from a general class of "differencing" functions of the form:
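- Expression (1) does not appear in this text. As a hedged reconstruction consistent with the surrounding description (a sliding average magnitude difference accumulated over "scnt" iterations at lags d = 1, ..., d_max), a standard AMDF-style form would be:

```latex
% Hedged reconstruction only; the exact weighted form of expression (1) is not
% reproduced in this text. u and v denote the contents of the two correlator
% sections and d indexes the histogram positions.
\[
  \mathrm{histogram}(d) \;=\; \frac{1}{\mathrm{scnt}} \sum_{n=1}^{\mathrm{scnt}}
  \bigl|\, u(n) - v(n + d) \,\bigr|, \qquad d = 1, \dots, d_{\max}
\]
```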
- the present invention also includes suitable apparatus for deriving "weighted" histograms according to expression (1) above.
- the data samples representing a first sub-part of the sequence are applied sequentially through a differencing correlator having first and second sections, the output of the first section connected to the input of the second section through a temporary storage area.
- a new data sample is then applied to the first correlator section and the remaining samples therein shift by one position.
- a data sample is thereby removed from the first correlator section to the temporary storage area for a first iteration of the differencing calculation.
- the magnitudes of the data samples in the second correlator section are then differenced with the magnitudes of positionally-corresponding data samples in the first correlator section, and absolute values of these differences are then calculated to produce "even” values which are then added to the histogram for the reference position. Thereafter, the data sample in the temporary storage area (for the first iteration) is applied to the second correlator section and the remaining samples therein shifted by one position. The "differencing,” “absolute value” and “summation” steps are then repeated to produce "odd” values of the histogram.
- This operation represents one complete cycle of the histogram, and is repeated "scnt" times according to expression (1) to complete the formation of the histogram for the reference position along the first sub-part of the data sample sequence.
- the process is then repeated for reference positions along other sub-parts of the sequence, each reference position preferably located a pitch period (or multiple thereof) apart, to form additional histograms.
- the plurality of "weighted" histograms comprise the transformation of the original signal. It has been found that transformations of the type disclosed herein retain a substantial part of the informational content of the original signal, with only the phase information removed. The transformation is then used according to the invention by various speech or other signal processing applications. For example, to form a compressed version of the original signal, a predetermined portion of each histogram generated every other pitch period of the signal is then stored. Conversely, to implement speech synthesis, the compressed transformation is reconstructed. In neither case, however, does the method require costly and complex conversion of the signal between the time and frequency domains, as in the prior art.
- a special purpose microprocessor is also provided which, under the control of a software routine, generates the histograms.
- a general purpose microprocessor is also provided for effecting overall system control, and for controlling specialized processing applications, such as signal compression and synthesis. These microprocessors operate concurrently in a full duplex digital transceiver configuration to facilitate real-time communications to and from the system.
- FIGURE 1A discloses a correlator structure for generating histograms according to the present invention.
- a plurality of such histograms form a so-called "transformation" of the signal which retains a substantial part of the informational content thereof.
- the technique is explained below with an emphasis on human speech as the source waveform. It should be appreciated, however, that the method and apparatus of the present invention is fully applicable to all types of analog and digital source signals, regardless of how such signals are derived.
- histograms are generated according to one of a plurality of correlation functions.
- these functions are so-called "differencing" functions which operate to produce weighted histograms, having d_max positions, of the form shown in expression (1) above.
- the resulting transformation (which comprises a plurality of such histograms) retains a substantial part of the informational content of the original signal, with only the phase information removed.
- the transformation is then used as the original signal itself, thus obviating costly and complicated conversion of the signal, or conversion of features extracted therefrom, between the time and frequency domains prior to and/or following processing.
- histograms comprising the signal transformation are generated by different types of known "auto" or "cross" correlation functions of the general form given in expression (2). If "u" is identical to "v" in expression (2), "histogram(d)" reduces to the well-known auto-correlation function. If "u" is not identical to "v", expression (2) represents a cross-correlation function.
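- Expression (2) itself does not appear in this text; based on the description of standard auto- and cross-correlation, it is presumably of the familiar form sketched below (the summation limits are an assumption):

```latex
% Assumed general correlation form for expression (2); u and v are the two
% data-sample sequences and d is the lag (histogram position).
\[
  \mathrm{histogram}(d) \;=\; \sum_{n} u(n)\, v(n + d), \qquad d = 1, \dots, d_{\max}
\]
% When u = v this reduces to the auto-correlation function; otherwise it is a
% cross-correlation function.
```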
- the correlator 20 includes a first section 24 having a top entrance 26 and a top exit 28.
- the correlator 20 also includes a second section 30 having a bottom entrance 32 and bottom exit 34.
- the top exit 28 of the first correlator section 24 is connected to the bottom entrance 32 of the second correlator section 30.
- the first correlator section 24 includes a temporary storage area 25 adjacent the exit 28 for temporarily storing a data sample, for the reasons to be described below.
- the speech waveform 10 is shown in analog form inside the correlator 20. It should be appreciated, however, that in the actual method and apparatus of the present invention, the speech waveform 10 is first converted into a sequence of digital data samples. As seen in FIGURE 1A, a sub-part of the speech waveform 10 is passed sequentially through the first correlator section 24, through the temporary storage area 25, and then into the second correlator section 30. As each new data sample enters the top entrance 26 of the first section 24, the remaining data samples in this correlator section are each shifted one position towards the exit 28. A data sample is then removed to the temporary storage area 25 and held there for a predetermined time period to be described.
- data samples in the second correlator section 30 are then differenced with positionally-corresponding data samples in the first correlator section 24.
- positionally-corresponding refers to data samples in the respective correlator sections at any moment in time located the same distance from the ends of the correlator. Therefore, the data sample 38 located adjacent the top entrance 26 of the first section 24 "positionally-corresponds" to the data sample 39 located adjacent the bottom exit 34 of the second section 30.
- the positions "d1,d3,d5" represent the “odd” values of the histogram with the positions "d2,d4,d6" representing the even values thereof.
- the length of the histogram 40 is normally two times the length of each correlator section. Also, the length of each sub-part of the data sample sequence is typically greater than "d_max."
- the SAMDF scheme begins at step 41 (assuming the correlator is filled with a portion of a first sub-part of the sequence) by initializing the d_max positions of the histogram to zero.
- In step 42, a new data sample is moved into the first correlator section 24 and the remaining samples therein are shifted by one position. Step 42 therefore moves a data sample into the temporary storage area 25 for the first iteration of the calculation.
- In step 43, differences in magnitude between corresponding samples in the correlator sections are calculated. In particular, the magnitude of the first sample in the correlator section 24 adjacent the top entrance 26 thereof is differenced from the magnitude of the last sample in the correlator section 30 adjacent the bottom exit 34 thereof.
- In step 44, the absolute values of the differences calculated in step 43 for each position in the correlator are determined and, in step 45, added to the summation to produce the "even" positions "d2,d4,d6" of the histogram 40. Thereafter, an inquiry 46 is made to determine if a complete cycle of the histogram formation has been run. If not, the routine branches to step 47, where the data sample in the temporary storage area 25 (received during the first iteration) is shifted into the second correlator section 30 and the remaining samples therein shifted by one position.
- Steps 43-45 are then repeated to increment the "odd" values "d1,d3,d5" of the histogram 40. If the result of inquiry 46 is positive, a test 48 is made to see if "scnt" samples have been applied to the temporary storage area 25; if not, the routine branches back to step 42, and the method repeats as described above. If the result of inquiry 48 is positive, the histogram may be normalized (for example, by dividing each histogram value by "scnt") to produce the completed histogram for the first sub-part of the data sample sequence originally applied through the correlator sections. This process is then repeated in step 49 for additional sub-parts of the signal (applied through the correlator sections) to produce additional histograms comprising the signal transformation.
- reference positions along the sample sequence are separated by a pitch period, or multiple thereof, of the signal. Also, when the SAMDF process of FIGURE 2 is implemented, the data sample moved into the temporary storage area 25 after "scnt/2" cycles represents the reference position along the sub-part of the sequence.
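- As an illustrative sketch only (not the patent's own firmware), the SAMDF cycle described above for FIGURE 2 can be expressed roughly as follows in Python. The function and variable names (samdf_histogram, first, second, temporary) and the exact mapping of sample pairs to histogram positions are assumptions chosen for readability:

```python
from collections import deque
from itertools import islice

def samdf_histogram(samples, d_max, scnt):
    """Hedged sketch of the SAMDF histogram cycle of FIGURE 2.
    `samples` is an iterable of digitized data samples for one sub-part of the
    sequence (it should contain at least d_max + scnt samples), `d_max` is the
    number of histogram positions, and `scnt` is the number of complete cycles."""
    samples = iter(samples)
    half = d_max // 2                         # each correlator section holds d_max/2 samples

    # Step 41: fill both correlator sections and zero the histogram. The oldest
    # samples have already flowed into the second section 30; newer ones sit in
    # the first section 24.
    second = deque(islice(samples, half), maxlen=half)
    first = deque(islice(samples, half), maxlen=half)
    histogram = [0.0] * d_max

    def accumulate(offset):
        # Steps 43-45: difference positionally-corresponding samples (the sample
        # adjacent the entrance of the first section pairs with the sample
        # adjacent the exit of the second section), take absolute values, and
        # add them to every other histogram position.
        for i in range(half):
            histogram[2 * i + offset] += abs(first[-1 - i] - second[i])

    for _ in range(scnt):
        # Step 42: shift a new sample into the first section; the sample pushed
        # out of it is held in the temporary storage area 25.
        temporary = first[0]
        first.append(next(samples))
        accumulate(offset=1)                  # "even" positions d2, d4, d6, ...
        # Step 47: move the held sample into the second section and shift it.
        second.append(temporary)
        accumulate(offset=0)                  # "odd" positions d1, d3, d5, ...

    # Optional normalization, e.g. dividing each value by "scnt".
    return [value / scnt for value in histogram]
```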
- In FIGURE 3, a schematic block diagram is shown of a speech system 50 designed to provide the capabilities needed to produce the signal transformation according to the present invention, and also to provide the capabilities needed for using this transformation in speech processing applications.
- system 50 will be described in the context of a speech development system.
- System 50 is fully capable of interfacing with all types of signal processing applications, and the reference to speech-related applications herein is not meant to be limiting.
- the speech system 50 includes a general purpose microprocessor 52 which has several input/output (I/O) devices tied thereto.
- Speech system 50 includes a pair of serial digital communication links 54 and 56 connected to the general purpose microprocessor 52 through universal asynchronous receiver/transmitters (UART's) 58 and 60, respectively. Such devices are well known and serve to interface the parallel word-based microprocessor 52 to the serial bit communication links 54 and 56.
- Speech system 50 also includes an analog input path 62 to the general purpose microprocessor 52 comprising bandpass filter 64 and analog-to-digital (A/D) converter 66.
- An analog output path 68 is also provided from the general purpose microprocessor 52 comprising low pass filter 70 and digital-to-analog (D/A) converter 72.
- An analog speech waveform is applied to the analog input path 62, where it is band limited by the filter 64, and digitized by the A/D converter 66.
- the digitized version of the speech waveform may then be transmitted over one of the digital serial communication links 54 or 56 to a remote system similar to the speech development system 50.
- the general purpose microprocessor 52 includes an associated random access memory (RAM) 51 for storing application programs and data, and also a read only memory (ROM) 53 for storing operating programs which control the microprocessor 52.
- the speech system 50 includes a special purpose microprocessor 74 which, under the control of a software routine, carries out the SAMDF process of FIGURE 2.
- Special purpose microprocessor 74 includes an associated control store 76 for storing this routine, and an associated random access memory (RAM) 78 for communicating with the general purpose microprocessor 52.
- General purpose microprocessor 52 passes digital data samples from the analog input path 62 into the RAM 78 and these samples are then processed in the special purpose microprocessor 74 under the control of a routine stored in control store 76. The resulting transformation of the speech waveform is then stored back in the RAM 78.
- the contents of RAM 78 are then read by general purpose microprocessor 52 without interrupting the continued processing of additional portions of the waveform by special purpose microprocessor 74.
- special purpose microprocessor 74 operates concurrently with general purpose microprocessor 52 to enable the microprocessor 74 to carry out the SAMDF correlation calculations while the microprocessor 52 provides other system control functions.
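- As an illustrative analogy only (not the patent's hardware or firmware), the concurrent hand-off of histograms through RAM 78 can be modelled with two Python threads sharing a queue; samdf_histogram is the sketch given earlier, and the queue, block size, d_max and scnt values are assumptions for the example:

```python
import queue
import random
import threading

histogram_queue = queue.Queue()                  # stands in for RAM 78
transformation = []                              # histograms gathered by the control side

def special_purpose_processor(sample_blocks):
    # Correlation work proceeds independently of the consumer, mirroring the
    # concurrent operation of special purpose microprocessor 74.
    for block in sample_blocks:
        histogram_queue.put(samdf_histogram(block, d_max=32, scnt=64))
    histogram_queue.put(None)                    # end-of-input marker

def general_purpose_processor():
    # System-control side (general purpose microprocessor 52): collect each
    # completed histogram and hand it to compression, synthesis, and so on.
    while (histogram := histogram_queue.get()) is not None:
        transformation.append(histogram)

blocks = [[random.randint(-128, 127) for _ in range(256)] for _ in range(4)]
producer = threading.Thread(target=special_purpose_processor, args=(blocks,))
consumer = threading.Thread(target=general_purpose_processor)
producer.start(); consumer.start()
producer.join(); consumer.join()
```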
- Speech system 50 provides full duplex digital transceiver operation for facilitating real-time communications to and from the system.
- control programs are downloaded into the RAM 51 associated with the general purpose microprocessor 52. These programs control the microprocessor 52 to download the SAMDF routine into the control store 76 associated with special purpose microprocessor 74.
- the speech waveform is then received over the analog input path 62 and processed as described above.
- this transformation is then used as the signal itself by speech processing applications such as compression, synthesis and segmentation.
- In FIGURE 4, a flowchart diagram is shown of a signal compression routine of the present invention which operates on the signal transformation to produce a compressed version of the original speech signal.
- the object of speech compression is to represent analog speech with as few digital bits as possible.
- Prior art techniques such as linear predictive coding (LPC) are based on the successful extraction of voice parameters from the speech signal and accurate voiced/unvoiced decisions.
- Although LPC and other prior art formant coding techniques provide effective speech signal compression in some applications, such techniques break down in noisy environments and when the speech signal is sampled at low data rates.
- the compression technique of the present invention takes advantage of certain informational redundancies inherent in the signal, which are also present in the signal transformation generated by the SAMDF process.
- a first source of informational redundancy in a speech signal exists because the speech waveform is substantially similar in any two contiguous pitch periods. Therefore, the storing of every other pitch period of the speech waveform represents a way to compress speech by a factor of 2:1.
- a second source of informational redundancy in the speech waveform is based on the notion that speech is normally a bipolar, approximately symmetrical waveform about an arbitrary reference level. If the waveform is rectified and zeros are eliminated therefrom, then the original waveform can be compressed by another factor of two, or by a total factor of 4:1.
- a third source of informational redundancy within the speech waveform is inherent in the way voiced signals are produced by the larynx.
- the glottal source has two phases, an open phase and a closed phase, and the resonances of the vocal tract are best represented in the speech waveform while the glottis is closed. Therefore, because the glottis is closed roughly 50% of the pitch period, only half of the speech waveform is carrying information during the pitch period itself. Accordingly, the storage of only one-half of a pitch period represents a way to compress the speech waveform by another factor of two, for a total compression ratio of 8:1.
- the SAMDF process correlates positive and negative phases of an input speech waveform, resulting in the histogram 40 with minima corresponding to half cycles of the waveform. Accordingly, use of the SAMDF correlation process exploits the positive-to-negative cycle redundancy inherent in the speech waveform. Moreover, as also seen in FIGURE 1B, the SAMDF process produces a highly symmetrical histogram 40, such that storage of only one-half of a pitch period represented in the histogram is required. Storage of one-half of a pitch period thus exploits the redundancy in the waveform resulting from the physical characteristics of the glottal source.
- the histogram 40 is generated by the correlator 20 by selecting reference positions along the data sample sequence every other pitch period, such that the histogram represents an "averaged" correlation over two pitch periods.
- This feature of the invention thus exploits the pitch period-to-pitch period redundancy inherent in the input speech waveform, resulting in a total compression ratio of 8:1.
- the compression routine in FIGURE 4 begins at instruction 80 wherein data samples are moved into the RAM 78, where they are processed by the special purpose microprocessor 74. As discussed above with respect to FIGURE 3, the data samples are obtained from conversion of an analog sound wave by the A/D converter 66.
- the SAMDF correlation is then carried out in step 80 by the special purpose microprocessor 74 of FIGURE 3 under the control of a software routine stored in the associated control store 76.
- a check 84 is made to determine whether or not a completed histogram (as described with respect to FIGURE 2) is ready for further processing. If the histogram is not ready, control returns to step 80, and another data sample is moved into the RAM 78 as previously described by step 42 in FIGURE 2. When the histogram is ready, i.e., the test in step 84 is positive, the histogram is moved from the RAM 78 to the RAM 51 in step 88, so that it can be processed by the general purpose microprocessor 52.
- The signal compression routine continues in step 90 to determine whether it is time to track the pitch of the waveform. If the result of the inquiry 90 is negative, i.e., if the time interval for tracking pitch has not elapsed, the routine branches to step 92, wherein one-half of the pitch period is encoded from the histogram, preferably by using two-bit adaptive differential pulse code modulation (ADPCM).
- Encoding of the compressed waveform incurs some overhead; for example, the frequency, or length of the pitch period, must be stored with the encoded waveform.
- the system preferably tracks the pitch of the input speech signal only at certain time intervals, which may vary from as frequently as each pitch period to as infrequently as several pitch periods.
- If the result of inquiry 90 is positive, step 94 determines the pitch period.
- In step 96, the routine continues by feeding the pitch period determined in step 94 back to the special purpose microprocessor 74.
- In step 98, the pitch is encoded, with the routine continuing in step 100 to calculate the maximum amplitude in the pitch period, or gain factor.
- In step 102, the gain factor is then encoded, preferably using a log (base 2) representation, and the routine continues with step 92 as discussed above.
- Following step 92, an inquiry 104 is made to determine whether compression is complete. If not, the routine recycles back to step 80, wherein additional portions of the speech signal are digitized and the compression routine continues as described above. If compression is complete, the routine terminates at step 106.
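- The compression loop of FIGURE 4 might be sketched as follows in Python; this is a hedged illustration only, and the pitch estimator, the fixed-step two-bit coder, the tracking interval and the frame layout (estimate_pitch, adpcm2_encode, compress, pitch_track_interval) are assumptions standing in for the patent's actual pitch extraction and ADPCM encoding:

```python
import math

def estimate_pitch(histogram):
    # Crude stand-in for the patent's pitch extraction (examine minima in the
    # histogram and pick a pitch trough): index of the smallest value past lag 1.
    return min(range(2, len(histogram)), key=histogram.__getitem__)

def adpcm2_encode(values, step=4):
    # Minimal fixed-step two-bit differential coder standing in for the patent's
    # two-bit ADPCM; a real ADPCM coder adapts the step size.
    codes, prediction = [], 0
    for v in values:
        code = max(-2, min(1, round((v - prediction) / step)))   # signed 2-bit range
        codes.append(code)
        prediction += code * step
    return codes

def compress(histograms, pitch_track_interval=4):
    """Hedged sketch of FIGURE 4: each histogram has one-half of its pitch period
    encoded with 2-bit ADPCM (step 92); pitch and gain are re-estimated and
    encoded only at the tracking interval (steps 94-102)."""
    frames, pitch = [], None
    for count, histogram in enumerate(histograms):
        frame = {}
        if pitch is None or count % pitch_track_interval == 0:
            pitch = estimate_pitch(histogram)                    # steps 94-98
            gain = max(abs(v) for v in histogram[:pitch]) or 1.0 # step 100: max amplitude
            frame["pitch"] = pitch
            frame["gain"] = round(math.log2(gain))               # step 102: log (base 2)
        frame["adpcm"] = adpcm2_encode(histogram[: pitch // 2])  # step 92: half pitch period
        frames.append(frame)
    return frames
```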
- the first analysis performed on the histogram is pitch extraction.
- Pitch is determined by examining minima in the histogram, analyzing for harmonic relations and selecting a first pitch trough. This value is then used to control the amount of time over which the next histogram will be summed.
- An effect of the process is to produce highly symmetrical histograms, so that only one-half of the pitch period in the histogram need be stored. This provides a 2:1 factor of compression in the speech waveform.
- histograms are output every other pitch period to provide another 2:1 factor of compression, or a total compression ratio of 4:1.
- the encoding step 92 codes the histograms using a two-bit ADPCM modulation scheme. This represents another factor-of-four compression relative to the original eight-bit digitized waveform. Thus, the total compression ratio of the technique is 16:1.
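- Putting these factors together (the bookkeeping below is inferred from the preceding description):

```latex
% Compression factors as described: the symmetric histogram allows storage of
% only half a pitch period, histograms are output every other pitch period,
% and eight-bit samples are re-coded with two-bit ADPCM.
\[
  \underbrace{2}_{\text{half pitch period}} \times
  \underbrace{2}_{\text{every other pitch period}} \times
  \underbrace{\tfrac{8\,\text{bits}}{2\,\text{bits}}}_{\text{ADPCM}}
  \;=\; 16:1
\]
```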
- In FIGURE 5, a flowchart diagram of a signal synthesis routine of the present invention is shown.
- this routine operates on the SAMDF signal transformation generated by the special purpose microprocessor, and in particular on the transformation as compressed by the compression routine set forth in FIGURE 4.
- Synthesis begins with instruction 110, wherein the routine is initialized by receiving data representing the compressed speech signal.
- the routine continues with inquiry 112, which determines whether the pitch period should be read. If the result of the inquiry 112 is positive, the routine continues in step 114 to read the pitch period from the bitstream data received over one of the digital serial communication links of FIGURE 3. Thereafter, the gain factor is read in step 116 from the bitstream data.
- The method continues in step 118, wherein one-half of the pitch period for the compressed segment is expanded from the bitstream data.
- In step 120, the pitch of the segment is interpolated, as is the gain factor in step 122.
- The routine continues in step 124 to synthesize the pitch period(s).
- The routine then enters inquiry 126 to determine whether the speech waveform synthesis has been completed. If not, the method returns to step 110 to get data to synthesize the next segment. If the synthesis is complete, the routine terminates at step 128.
- synthesis occurs in four steps.
- the stored encoded pitch and gain factors are first read and decoded.
- the second step consists of a simple expansion of the histogram from ADPCM to pulse code modulation (PCM) format, which is accomplished in step 118 of FIGURE 5.
- In the third step, the reconstructed waveform is reflected in step 124 to form the pitch period.
- the fourth and final step is to repeat the pitch period, with the process then repeated for each subsequent portion of the compressed speech waveform.
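- A hedged Python sketch of this four-step synthesis loop (FIGURE 5) is given below; it assumes the frame layout produced by the compression sketch above, and adpcm2_decode, the scaling from the log (base 2) gain and the repetition count are illustrative assumptions rather than the patent's exact procedure:

```python
def adpcm2_decode(codes, step=4):
    # Inverse of the minimal two-bit coder sketched for the compression routine.
    values, prediction = [], 0
    for code in codes:
        prediction += code * step
        values.append(prediction)
    return values

def synthesize(frames):
    """Hedged sketch of FIGURE 5: read pitch and gain when present, expand the
    ADPCM data to PCM (step 118), reflect the half period to form a full pitch
    period (step 124), and repeat the period to fill the segment."""
    output, pitch, gain = [], None, 0
    for frame in frames:
        pitch = frame.get("pitch", pitch)          # steps 112-114: read pitch if stored
        gain = frame.get("gain", gain)             # step 116: read the gain factor
        half = adpcm2_decode(frame["adpcm"])       # step 118: ADPCM-to-PCM expansion
        period = half + half[::-1]                 # third step: reflect to form the pitch period
        scale = (2 ** gain) / (max(abs(v) for v in period) or 1)
        period = [v * scale for v in period]       # restore amplitude from the log2 gain
        output.extend(period * 2)                  # fourth step: repeat the pitch period
    return output
```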
- the present invention provides a method and apparatus for generating a transformation of a signal waveform useful in speech processing applications, for example, compression and synthesis.
- This transformation retains a substantial part of the informational content of the original signal and therefore can be used directly to represent the signal.
- the "use" of the signal transformation as the signal itself obviates costly and complex computational algorithms for converting the signal (or features thereof) between the time and frequency domains prior to and following the signal processing application(s).
- a special purpose microprocessor is provided to run a software routine for generating the transformation by calculating a sliding average magnitude difference function (SAMDF) histogram for continuous segments of the speech waveform.
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Signal Processing (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Analogue/Digital Conversion (AREA)
- Transmission Systems Not Characterized By The Medium Used For Transmission (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US77053085A | 1985-08-29 | 1985-08-29 | |
US770530 | 2001-01-25 |
Publications (2)
Publication Number | Publication Date |
---|---|
EP0212323A2 true EP0212323A2 (de) | 1987-03-04 |
EP0212323A3 EP0212323A3 (de) | 1988-03-16 |
Family
ID=25088868
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP86110212A Withdrawn EP0212323A3 (de) | 1985-08-29 | 1986-07-24 | Verfahren und Einrichtung von Signalumwandlung und ihre Anwendung zur Signalverarbeitung |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP0212323A3 (de) |
JP (1) | JPS6252600A (de) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0411290A2 (de) * | 1989-08-04 | 1991-02-06 | Scott Instruments Corporation | Verfahren und Einrichtung zur Extraktion informationstragender Teile eines Signals zur Erkennung von verschiedenen Formen eines gleichen Musters |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO1998042077A1 (fr) * | 1997-03-18 | 1998-09-24 | Nippon Columbia Co., Ltd. | Detecteur de distorsion, correcteur de distorsion, et procede de correction de distorsion pour signal audio numerique |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4004096A (en) * | 1975-02-18 | 1977-01-18 | The United States Of America As Represented By The Secretary Of The Army | Process for extracting pitch information |
FR2337393A1 (fr) * | 1975-12-29 | 1977-07-29 | Dialog Syst | Procede et appareil d'analyse et de reconnaissance de parole |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE2939077C2 (de) * | 1979-09-27 | 1987-04-23 | Philips Patentverwaltung Gmbh, 2000 Hamburg | Verfahren und Anordnung zum Bestimmen charakteristischer Werte aus einem zeitbegrenzten Geräuschsignal |
-
1986
- 1986-07-24 EP EP86110212A patent/EP0212323A3/de not_active Withdrawn
- 1986-08-29 JP JP20192286A patent/JPS6252600A/ja active Pending
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4004096A (en) * | 1975-02-18 | 1977-01-18 | The United States Of America As Represented By The Secretary Of The Army | Process for extracting pitch information |
FR2337393A1 (fr) * | 1975-12-29 | 1977-07-29 | Dialog Syst | Procede et appareil d'analyse et de reconnaissance de parole |
Non-Patent Citations (2)
Title |
---|
JOURNAL OF THE AUDIO ENGINEERING SOCIETY, vol. 10, no. 2, April 1962, pages 163-166; M.R. SCHROEDER: "Correlation techniques for speech bandwidth compression" * |
SIGNAL PROCESSING, vol. 5, no. 6, November 1983, pages 491-513, Elsevier Science Publishers B.V., Amsterdam, NL; E. AMBIKAIRAJAH et al.: "The time-domain periodogram algorithm" *
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0411290A2 (de) * | 1989-08-04 | 1991-02-06 | Scott Instruments Corporation | Verfahren und Einrichtung zur Extraktion informationstragender Teile eines Signals zur Erkennung von verschiedenen Formen eines gleichen Musters |
EP0411290A3 (de) * | 1989-08-04 | 1994-02-09 | Scott Instr Corp |
Also Published As
Publication number | Publication date |
---|---|
EP0212323A3 (de) | 1988-03-16 |
JPS6252600A (ja) | 1987-03-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US4771465A (en) | Digital speech sinusoidal vocoder with transmission of only subset of harmonics | |
US4301329A (en) | Speech analysis and synthesis apparatus | |
EP0260053B1 (de) | Digitaler Vocoder | |
US4969193A (en) | Method and apparatus for generating a signal transformation and the use thereof in signal processing | |
CN102623015B (zh) | 可变速率语音编码 | |
US6345248B1 (en) | Low bit-rate speech coder using adaptive open-loop subframe pitch lag estimation and vector quantization | |
US4821324A (en) | Low bit-rate pattern encoding and decoding capable of reducing an information transmission rate | |
KR100298300B1 (ko) | 포만트유사도측정에의한피솔라를이용한음성파형부호화방식 | |
EP0361443A2 (de) | Verfahren und System zur Sprachcodierung unter Anwendung von Vektorquantisierung | |
GB2102254A (en) | A speech analysis-synthesis system | |
JPH0869299A (ja) | 音声符号化方法、音声復号化方法及び音声符号化復号化方法 | |
EP0726560A2 (de) | System zum Abspielen mit veränderbarer Geschwindigkeit | |
KR0173923B1 (ko) | 다층구조 신경망을 이용한 음소 분할 방법 | |
EP0459363B1 (de) | Sprachkodierer | |
US4945565A (en) | Low bit-rate pattern encoding and decoding with a reduced number of excitation pulses | |
EP1041541B1 (de) | Celp sprachkodierer | |
CA2261956A1 (en) | Method and apparatus for searching an excitation codebook in a code excited linear prediction (clep) coder | |
EP0813183A2 (de) | Sprachwiedergabesystem | |
US5822721A (en) | Method and apparatus for fractal-excited linear predictive coding of digital signals | |
EP0212323A2 (de) | Verfahren und Einrichtung von Signalumwandlung und ihre Anwendung zur Signalverarbeitung | |
Roucos et al. | A segment vocoder algorithm for real-time implementation | |
JPH05297895A (ja) | 高能率符号化方法 | |
JP3398968B2 (ja) | 音声分析合成方法 | |
Kim et al. | On a Reduction of Pitch Searching Time by Preprocessing in the CELP Vocoder | |
EP0402947B1 (de) | Einrichtung und Verfahren zur Sprachkodierung mit Regular-Pulsanregung |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
AK | Designated contracting states |
Kind code of ref document: A2 Designated state(s): AT BE CH DE FR GB IT LI LU NL SE |
|
PUAL | Search report despatched |
Free format text: ORIGINAL CODE: 0009013 |
|
AK | Designated contracting states |
Kind code of ref document: A3 Designated state(s): AT BE CH DE FR GB IT LI LU NL SE |
|
17P | Request for examination filed |
Effective date: 19880913 |
|
17Q | First examination report despatched |
Effective date: 19910620 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE APPLICATION IS DEEMED TO BE WITHDRAWN |
|
18D | Application deemed to be withdrawn |
Effective date: 19940111 |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: 732E |
|
RIN1 | Information on inventor provided before grant (corrected) |
Inventor name: SCOTT, BRIAN LEE Inventor name: NEWELL, JOHN MARK Inventor name: SMITH, LLOYD ALLEN Inventor name: GOODMAN, ROBERT GRAY |