WO2001080222A1 - Speech recognition method and device, speech synthesis method and device, recording medium
(Original title: Procede et dispositif de reconnaissance vocale, procede et dispositif de synthese vocale, support d'enregistrement)
- Publication number: WO2001080222A1 (application PCT/JP2001/003079)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- data
- value
- timing
- amplitude
- correlation
- Prior art date
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01L—MEASURING FORCE, STRESS, TORQUE, WORK, MECHANICAL POWER, MECHANICAL EFFICIENCY, OR FLUID PRESSURE
- G01L13/00—Devices or apparatus for measuring differences of two or more fluid pressure values
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/02—Feature extraction for speech recognition; Selection of recognition unit
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L13/00—Speech synthesis; Text to speech systems
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
Definitions
- the present invention relates to a speech recognition method and apparatus, a speech synthesis method and apparatus, and a recording medium storing a program for realizing these functions in software; in particular, it relates to speech synthesis technology that produces speech from data such as text.
Background art
- speech recognition provides a voice processing interface in which a computer recognizes speech uttered by a human and automatically processes it.
- speech recognition technologies range from word recognition, which recognizes numbers and words, to continuous speech recognition, which understands meaning and content.
- speaker identification technology for identifying who generated the voice is also included in the speech recognition technology in a broad sense.
- speech synthesis technology for synthesizing and outputting speech from data such as text has been developed.
- in text-to-speech synthesis, the text data of words composed of various characters such as kanji and alphabetic characters is analyzed, and speech is synthesized with accents and intonation applied according to pre-set rules.
- current speech recognition technology cannot recognize every uttered speech, and there are limits to its recognition rate. The same word can sound different depending on the speaker, so recognition accuracy varies from speaker to speaker. Moreover, as the number of vocabulary items and speakers to be recognized increases, speech recognition becomes even more difficult.
- meanwhile, synthesized speech has not yet escaped sounding mechanical, and improving its quality to obtain synthesized speech closer to a real human voice has remained an issue.
- the voice recognition method of the present invention differentiates an input voice signal, detects points at which the differential value satisfies a predetermined condition as sample points, and obtains discrete amplitude data at each detected sample point together with timing data representing the time intervals between the sample points. Correlation data are generated using the amplitude data and the timing data, and the input speech is recognized by comparing the generated correlation data with correlation data previously generated and accumulated in the same manner for various sounds.
- the input audio signal may also be oversampled, and the oversampled data may be sampled at the time intervals of the points where the differential value satisfies a predetermined condition.
- the speech recognition device of the present invention comprises A/D conversion means for A/D converting an input speech signal; differentiation means for differentiating the digital data output from the A/D conversion means; data generating means for detecting points at which the differential value obtained by the differentiation means satisfies a predetermined condition as sample points, and for generating amplitude data at each detected sample point and timing data representing the time intervals between the sample points; correlation calculating means for generating correlation data using the amplitude data and the timing data generated by the data generating means; and data collating means for recognizing the input voice by collating the correlation data generated by the correlation calculating means with the correlation data stored in a recording medium.
- the correlation calculation means may perform a process of rounding the lower several bits of the correlation data.
- oversampling means may also be provided for oversampling the digital data output from the A/D conversion means using a clock of a multiple frequency, and sampling may then be performed at the time intervals of the points where the differential value of the oversampled data satisfies the predetermined condition.
- the voice synthesis method of the present invention is characterized in that data other than speech (such as text) are associated in advance with sets of amplitude data at sample points at which the differential value of the voice signal corresponding to those data satisfies a predetermined condition, together with timing data indicating the time intervals between those sample points. When desired data are designated, the set of amplitude data and timing data associated with the designated data is used to obtain interpolation data that interpolate between amplitude data separated by the time intervals indicated by the timing data, thereby synthesizing speech.
- the speech synthesis apparatus of the present invention likewise stores sets of amplitude data and timing data in storage means in association with data other than speech; when desired data are designated, the set of amplitude data and timing data stored in association with the designated data is read out and used for synthesis.
- the computer-readable recording medium of the present invention records a program for causing a computer to execute the processing procedure of the speech recognition method described in claim 1 or the processing procedure of the speech synthesis method described in claim 17, or a program for causing a computer to function as each means described in claim 9 or claim 19.
- because the present invention comprises the above technical means, it can provide an unprecedentedly simple speech recognition method and speech synthesis method using amplitude data and timing data at predetermined sample points.
- by using correlation data rather than the raw amplitude data and timing data, the speech recognition rate can be improved. Furthermore, by rounding off the lower few bits of the correlation data and by oversampling the audio signal, the recognition rate can be improved further still.
- FIG. 1 is a block diagram illustrating a configuration example of the speech recognition device according to the first embodiment.
- FIG. 2 is a diagram illustrating the principle of speech recognition according to the present embodiment.
- FIG. 3 is a block diagram illustrating a configuration example of a data generation unit.
- FIG. 4 is a diagram showing a configuration example of the differentiator shown in FIG. 3.
- FIG. 5 is a block diagram showing a configuration example for detecting a sample point by performing double differentiation.
- FIG. 6 is a block diagram illustrating a configuration example of the speech synthesis device according to the first embodiment.
- FIG. 7 is a diagram for explaining the principle of speech synthesis according to the present embodiment.
- FIG. 8 is a diagram illustrating the interpolation principle of the present embodiment, extracting the section from time T1 to T2 shown in FIG. 7.
- FIG. 9 is a diagram illustrating an example of a sampling function.
- FIG. 10 is an explanatory diagram of an interpolation operation for speech synthesis.
- FIG. 11 is a diagram for explaining an interpolation operation expression that is a specific example of the data interpolation processing.
- FIG. 12 is a block diagram illustrating a configuration example of a speech recognition device according to the second embodiment.
- FIG. 13 is a diagram showing a digital basic waveform used in the second embodiment.
- FIG. 14 is a diagram for explaining an operation example of the oversampling and convolution operations of the second embodiment.
- FIG. 15 is a diagram illustrating a function generated from the digital basic waveform according to the second embodiment.
- FIG. 16 is a diagram showing a configuration example of the oversampling circuit shown in FIG. 12.
BEST MODE FOR CARRYING OUT THE INVENTION
- FIG. 1 is a block diagram illustrating a configuration example of the speech recognition device according to the present embodiment.
- the speech recognition device of the present embodiment includes a low-pass filter (LPF) 1, an A/D converter 2, a data generation unit 3, a correlation operation unit 4, a data registration unit 5, a data memory 6, a data collating unit 7, and a mode designating unit 8.
- the input analog signal is obtained by inputting a voice uttered by a human or the like from a microphone (not shown). After noise is removed by the LPF 1, to make the sample points described later easier to detect, this input analog signal is converted into digital data by the A/D converter 2.
- the A/D converter 2 performs A/D conversion in accordance with an input clock CK0 of a predetermined frequency (for example, 44.1 kHz), converting the input analog signal into, for example, 16-bit digital data.
- the audio data digitized by the A/D converter 2 is input to the data generator 3.
- the data generator 3 differentiates the digital data supplied from the A/D converter 2 and detects the sample points described later according to the differentiation result. It then obtains and outputs amplitude data representing the amplitude of the digital data at each detected point and timing data (the number of clocks CK0) representing the time intervals between the sample points.
- FIG. 2 is a diagram for explaining the principle of the data generation process performed by the data generation unit 3.
- the data input to the data generator 3 is the digital data obtained by A/D converting the input analog signal; FIG. 2 shows the waveform of the digital data output from the A/D converter 2 drawn in analog form.
- the numerical values shown in FIG. 2 are for explanation, and do not correspond to actual numerical values.
- as shown in FIG. 2, points at which the differential absolute value (the slope of the signal) becomes equal to or less than a predetermined value including "0" (these are referred to as sample points) 102a to 102f are detected. Then, amplitude data values representing the amplitude of the digital data at each of the sample points 102a to 102f and timing data values representing the time intervals at which the sample points appear are obtained, and pairs of amplitude data value and timing data value are output.
- for example, at time T2, the timing data value "5", indicating the time interval from time T1 at which the preceding sample point 102a was detected, and the amplitude data value "3" of sample point 102b are obtained, so the set (5, 3) of these data is output as the data at time T2.
- similarly, at time T3, the timing data value "7", indicating the time interval from time T2 at which sample point 102b was detected, and the amplitude data value "9" of sample point 102c are obtained, and the set (7, 9) of these data values is output as the data at time T3.
- in the same way, the timing data values indicating the time intervals T3-T4, T4-T5, and T5-T6 are paired with the amplitude data values of the sample points 102d, 102e, and 102f detected at times T4, T5, and T6, and the pairs (3, 1), (3, 6), and (3, 3) are output as the data at times T4, T5, and T6.
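- as a rough illustration of this data generation principle, the following Python sketch detects sample points from a digitized signal and emits (timing, amplitude) pairs like those of FIG. 2; the per-clock difference used as the "differential" and the threshold value are illustrative assumptions, not values taken from the patent:
```python
def extract_sample_points(x, threshold=1):
    """Detect points where the differential absolute value
    |x[n] - x[n-1]| is <= threshold, and emit (timing, amplitude)
    pairs as in FIG. 2. The threshold is a hypothetical tuning
    parameter; the text only says 'a predetermined value including 0'."""
    pairs = []
    last = 0  # clock index of the previously detected sample point
    for n in range(1, len(x)):
        if abs(x[n] - x[n - 1]) <= threshold:
            pairs.append((n - last, x[n]))  # (timing, amplitude)
            last = n
    return pairs
```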
- FIG. 3 is a block diagram showing a configuration example of the data generation unit 3.
- a differentiator 301 differentiates the digital data input from the A/D converter 2 at each sampling point and outputs the absolute value to a sample point detector 302.
- based on the result of the differentiation by the differentiator 301, the sample point detector 302 detects the sample points, where the differential absolute value of the digital data is equal to or less than a predetermined value.
- FIG. 4 is a diagram showing a configuration example of the differentiator 301.
- the differentiator 301 of the present embodiment is configured by a difference absolute value circuit that calculates a difference absolute value between data of two consecutive sampling points.
- the differentiators 31 and 32 calculate the difference between the data at two consecutive sampling points input at nodes a and b. That is, the differentiator 31 calculates the difference a - b and the differentiator 32 calculates the difference b - a, and they output the results to the OR circuits 33 and 34, respectively. When the calculated difference is negative, the differentiators 31 and 32 output the value "1" as a borrow in addition to the difference value.
- the OR circuit 33 takes the logical sum of the difference value calculated by the differentiator 31 and its borrow output, and outputs the result to the AND circuit 35. The other OR circuit 34 likewise takes the logical sum of the difference value calculated by the differentiator 32 and its borrow output, and outputs the result to the AND circuit 35. The AND circuit 35 takes the logical product of the outputs of the two OR circuits 33 and 34 and outputs the result to node c. Further, the borrow output of the differentiator 31 is output to node d, and the difference value calculated by the differentiator 32 is output to node e.
- with this configuration, the difference absolute value |a - b| of the data at two consecutive sampling points is output at node c; at node d, the value "1" is output when the data value at node b is larger than the data value at node a; and at node e, the difference value b - a between the data at nodes a and b is output.
- the timing generation section 303 counts the number of clocks CK0 supplied from the detection of one sample point to the detection of the next sample point, and outputs this as timing data. At the same time, it outputs a timing clock indicating the timing of the detection point of each sample point.
- the amplitude generation unit 304 extracts only the digital data at the corresponding sample point position in accordance with the timing clock output from the timing generation unit 303 and outputs it as amplitude data.
- the pairs of the amplitude data of each sample point generated by the amplitude generation section 304 and the timing data indicating the time intervals between the sample points generated by the timing generation section 303 are output to the correlation calculation unit 4 shown in FIG. 1.
- the correlation calculator 4 calculates the correlation between each amplitude data and each timing data output from the data generator 3.
- Various methods are conceivable for this correlation operation.
- in one example, the ratios between the respective amplitude data and the ratios between the respective timing data output from the data generator 3 are calculated. For example, if d1, d2, d3, d4, ... are obtained as the amplitude data and t1, t2, t3, t4, ... as the timing data, ratios of consecutive values such as d2/d1, d3/d2, ... and t2/t1, t3/t2, ... are calculated (formulas (1a) and (1b)).
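- a minimal sketch of this correlation operation, assuming the consecutive-ratio reading of formulas (1a) and (1b) above; the scaling factor and the number of rounded-off bits are illustrative, standing in for the multiply-then-round processing mentioned later:
```python
def correlation_data(amplitudes, timings, scale=256):
    """Correlation data as ratios of consecutive values, scaled to
    integers so the lower few bits can be rounded away (here 3 bits,
    an assumed value)."""
    amp = [int(scale * b / a) & ~0x7 for a, b in zip(amplitudes, amplitudes[1:])]
    tim = [int(scale * b / a) & ~0x7 for a, b in zip(timings, timings[1:])]
    return amp, tim
```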
- the mode designating unit 8 designates either a mode for registering the correlation data generated by the correlation calculating unit 4 in the data memory 6 as matching data for use in speech recognition processing, or a mode for actually performing speech recognition processing using the various matching data registered in the data memory 6.
- the data registration unit 5 registers the correlation data generated by the correlation calculation unit 4 in the data memory 6 as matching data when the registration mode is specified by the mode specification unit 8.
- the data memory 6 is a recording medium for storing data. In the registration mode of the correlation data (matching data), the data memory 6 captures and records the correlation data generated by the correlation operation unit 4. Also, in the voice recognition mode, the stored correlation data (matching data) is read out and output in response to a request given from the data matching unit 7.
- the data matching unit 7 performs a pattern matching process using the correlation data output from the correlation operation unit 4 and the matching data read from the data memory 6, and obtains a plurality of data stored in the data memory 6. From the matching data, matching data that matches the correlation data from the correlation calculator 4 with a certain probability or more is detected. As a result, the voice input from a microphone or the like (not shown) is recognized as the voice corresponding to the detected matching data, and the recognition result is output to a data processing unit or the like (not shown). As a result, the data processing unit executes a process corresponding to the content of the recognized voice.
- that is, when registering matching data, a voice uttered by a human is input as an analog signal, the analog signal is digitized and processed, and the sample points at which the differential absolute value of the digital data is equal to or less than a predetermined value are detected. Then, correlation data relating to the amplitude data at the sample points and correlation data relating to the timing data representing the time intervals between the sample points are generated and registered as matching data in a recording medium such as the data memory 6.
- at recognition time, a speech uttered by a human is processed in the same way to generate correlation data of the amplitude data and correlation data of the timing data. Then, by performing pattern matching between the correlation data generated in this way and the plurality of matching data registered in advance in the data memory 6, the input voice is recognized, for example as a particular word.
- the amplitude data and the timing data generated by the data generation unit 3 are unique data that differ depending on the content of the input voice. Therefore, by performing pattern matching using the correlation data generated from the amplitude data and the timing data, it is possible to recognize what the input voice is.
- in the present embodiment, the amplitude data and the timing data generated by the data generation unit 3 are not used directly as matching data; instead, correlation data representing their ratios are used. This makes it possible to improve the speech recognition rate.
- that is, the values of the amplitude data and the timing data can differ depending on the loudness and speed of the utterance. If the amplitude data and timing data were used directly as matching data, even the same word could therefore be recognized as a different sound depending on the state of the utterance.
- in the present embodiment, the ratio between two consecutive amplitude data values and the ratio between two consecutive timing data values are calculated as in equations (1a) and (1b), respectively.
- the calculation of the correlation data is not limited to this example.
- for example, the denominator and the numerator may be reversed in the ratio calculations of equations (1a) and (1b).
- alternatively, ratios between data values separated by more than one position may be calculated.
- the correlation value may be obtained by addition, subtraction, multiplication, or any combination of addition, subtraction, multiplication and division.
- the correlation operation is not limited to the correlation operation using two data, but may be a correlation operation using more data.
- also, although the correlation value of the amplitude data and the correlation value of the timing data are calculated separately in the above example, a correlation value between the amplitude data and the timing data may be calculated as well.
- the method of the correlation calculation is not particularly limited as long as the correlation is calculated so that the same voice has almost the same value in any utterance state.
- by devising the correlation operation appropriately, it is possible to increase the speech recognition rate further.
- in addition, the boundary value used in the pattern matching processing by the data matching unit 7 to judge whether two voices are the same, that is, the threshold on the degree of matching with the matching data, can be adjusted to raise the speech recognition rate to some extent.
- further, the decimal places of the calculated ratio data may be truncated. In performing this rounding operation, the calculated correlation data may first be multiplied by some factor, and then the lower several bits may be rounded off.
- also, the first and last few pieces of correlation data in the series of correlation data may be excluded from the pattern matching, or the correlation data may be generated without using the first and last several amplitude data and timing data.
- this is because the amplitude data and timing data obtained at the beginning and end of an utterance may be unreliable. By performing pattern matching while excluding the amplitude data and timing data generated at the unreliable beginning and end of the utterance, or the correlation data generated from them, the speech recognition rate can be improved further.
- the data generation unit 3 of the above embodiment detects, from the data obtained by digitizing the input analog signal, a point at which the differential absolute value of the digital data is equal to or less than a predetermined value including “0” as a sample point.
- however, the sample point detection method is not limited to this. For example, from the series of digital data supplied by the A/D converter 2, positions where the differential absolute value becomes small, that is, positions at which minima of the differential absolute value appear, may be detected as sample points.
- alternatively, the digital data supplied from the A/D converter 2 may be differentiated once, the obtained differential absolute value differentiated again (double differentiation), and the point immediately before the polarity of the double differential value changes from minus or zero to plus extracted as a sample point.
- of the points extracted based on the polarity of the double differential value, only those at which the first-order differential absolute value is smaller than a fixed value may be detected as regular sample points.
- at a minimum of the first-order differential absolute value, the polarity of the double differential value, obtained by differentiating that absolute value again, always changes from minus to plus. Therefore, by finding the double differential value of the digital data and detecting the points where its polarity changes from negative to positive (including points where the double differential value is zero), the minimum points of the first-order differential absolute value can be detected accurately. In this case, even when two minima of the same value occur consecutively, one of them can be reliably detected as a sample point. In addition, by detecting as regular sample points only those points where the first-order differential absolute value is smaller than a fixed value, unnecessary points can be prevented from being detected as sample points.
- FIG. 5 is a block diagram showing an example of a configuration for detecting a sample point by performing double differentiation as described above.
- FIG. 5 shows a configuration example of the differentiator 301 and the sample point detector 302 shown in FIG. 3.
- the differentiator 301 includes a first differentiating unit 21, a rounding unit 22, and a second differentiating unit 23.
- the sample point detecting section 302 includes a polarity change point detecting section 24 and a threshold processing section 25.
- the first differentiating unit 21 is configured as shown in FIG. 4; it differentiates the digital data supplied from the A/D converter 2 at each sampling point, and obtains and outputs the absolute value.
- the rounding operation unit 22 performs a process of dropping lower-order bits of the absolute value of the one-time differential calculated by the first differentiating unit 21. This process is performed in order to allow a margin for determining whether or not a sample point is detected when detecting a sample point based on the differential absolute value calculated by the differentiator 301.
- for example, the lower 3 bits are dropped by dividing the first-order differential absolute value by 8. Doing so avoids the influence of minute fluctuations such as noise and prevents unnecessary points from being detected as sample points.
- the data output from the rounding operation section 22 is supplied to a second differentiating section 23 and a threshold processing section 25 in the sampling point detecting section 302.
- the second differentiation unit 23 is also configured as shown in FIG. 4, and further differentiates the one-time differential absolute value subjected to the rounding operation by the rounding operation unit 22 for each sampling point.
- the double differential value obtained by the second differentiating unit 23 and the borrow value representing its polarity are supplied to the polarity change point detecting unit 24 in the sample point detecting unit 302.
- the polarity change point detector 24 extracts, as a sample point candidate, the point immediately before the polarity of the double differential value supplied from the second differentiating unit 23 in the differentiator 301 changes from minus to plus; for example, when negative double differential values are obtained in succession, the last negative point, or the point at which the double differential value becomes zero, is extracted as a sample point candidate. An isolated negative point, not part of a run of negative double differential values, may also be extracted as a sample point candidate.
- the positive and negative polarity of the differential value is determined based on the borrow value of the differentiator 31 output at node d shown in FIG. 4.
- when two consecutive sampling points have the same value, the point on the side where the polarity changes may be detected as the sample point, or the point closer to whichever neighboring sample point before or after the pair has the smaller absolute value may be detected as the sample point.
- for the sample point candidates extracted by the polarity change point detection unit 24, the threshold processing unit 25 compares the first-order differential absolute value supplied from the rounding operation unit 22 with a predetermined threshold value. Only the points where the differential absolute value is smaller than the threshold are detected as regular sample points and transmitted to the timing generation unit 303 in FIG. 3.
- here, the threshold processing is performed using the first-order differential absolute value after the rounding operation by the rounding operation unit 22, but the threshold processing may instead be performed using the first-order differential absolute value calculated by the first differentiating unit 21 before the rounding operation.
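- the following Python sketch strings the FIG. 5 pipeline together end to end, assuming per-clock differences; the threshold value is illustrative, and the returned indices refer to the differentiated sequence rather than the raw input:
```python
def detect_sample_points_dd(x, threshold=16):
    """FIG. 5 pipeline sketch: first difference -> absolute value ->
    drop the lower 3 bits (divide by 8, the rounding unit 22) ->
    second difference -> minus-to-plus polarity change -> keep only
    candidates whose rounded first-derivative magnitude is below
    the (assumed) threshold."""
    d1r = [abs(x[n] - x[n - 1]) // 8 for n in range(1, len(x))]
    d2 = [d1r[n] - d1r[n - 1] for n in range(1, len(d1r))]
    points = []
    for n in range(1, len(d2)):
        # point just before the double differential turns from <= 0 to > 0
        if d2[n] > 0 and d2[n - 1] <= 0 and d1r[n] < threshold:
            points.append(n)
    return points
```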
- in the above, the point immediately before the polarity of the double differential value changes from negative to positive is extracted as a sample point, but the point immediately after the change may be extracted instead.
- furthermore, the sample points may be detected based on a differential value including its polarity, without using the absolute value of the derivative.
- in this case, the differentiator 301 of FIG. 3 differentiates the digital data input from the A/D converter 2 once.
- the differentiator 301 differentiates the digital data every time the input clock CK0 of the predetermined frequency is given.
- the differential value is obtained by subtracting the immediately preceding data from the current data captured at the timing of a certain input clock CK0. At this time, where there is no data, the default value is used.
- the sample point detector 302 receives the digital data output from the A/D converter 2 in addition to the differential value calculated by the differentiator 301. Based on these data, it detects the points at which the polarity of the differential value of the digital data changes as sample points.
- the sampling point detector 302 first detects a point where the polarity of the differential value changes from positive to negative, a point where the polarity of the differential value changes from negative to positive, and a point where the differential value becomes 0.
- at a point where the polarity of the differential value changes from positive to negative, of the points on either side of the change, the one with the larger digital data value from the A/D converter 2 is detected as the sample point.
- at a point where the polarity of the differential value changes from negative to positive, of the points on either side of the change, the one with the smaller digital data value from the A/D converter 2 is detected as the sample point.
- at a point where the differential value becomes 0, the point itself is detected as the sample point.
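- a minimal sketch of this signed-derivative variant; the exact index handling around the polarity change is an illustrative assumption:
```python
def detect_by_polarity(x):
    """Detect sign changes of the first difference and choose the
    sample point as described above: the larger sample at a +/- change
    (peak), the smaller sample at a -/+ change (valley), and the point
    itself where the difference is 0."""
    points = []
    prev_d = x[1] - x[0]
    for n in range(2, len(x)):
        d = x[n] - x[n - 1]
        if prev_d > 0 and d < 0:      # peak: keep the larger value
            points.append(n - 1 if x[n - 1] >= x[n] else n)
        elif prev_d < 0 and d > 0:    # valley: keep the smaller value
            points.append(n - 1 if x[n - 1] <= x[n] else n)
        elif d == 0 and prev_d != 0:  # zero slope: the point itself
            points.append(n)
        prev_d = d
    return points
```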
- the timing generation section 303 counts the number of clocks CK0 supplied from the time when one sample point is detected until the time when the next sample point is detected, and outputs this as timing data. A timing clock indicating the timing of the sampling point detection point is output.
- the amplitude generation section 304 extracts only digital data at the corresponding sample point position and outputs it as amplitude data in accordance with the timing clock output from the timing generation section 303.
- next, the speech synthesis device will be described. Text data representing a predetermined word or sentence is associated in advance with the set of amplitude data and timing data generated from the speech signal corresponding to that word or sentence by the processing up to the data generation unit 3 of the speech recognition device in FIG. 1. When text data is designated, the interpolation operation described later is performed using the amplitude data and timing data associated with it, generating interpolation data that fill in between the individual amplitude data; the result is then D/A converted and output.
- FIG. 6 is a block diagram illustrating a configuration example of the speech synthesis device according to the present embodiment.
- the speech synthesizer of this embodiment includes a text analysis unit 11, a data readout unit 12, a data memory 13, a timing generator 14, a D-type flip-flop 15, an interpolation processing section 16, a D/A converter 17, and an LPF 18.
- the data memory 13 stores a set of amplitude data and timing data generated from speech corresponding to text data representing various syllables, words, sentences, and the like in association with the text data.
- the amplitude data and timing data stored here are generated from the audio corresponding to the text data by the same processing as that of the data generation unit 3 of the speech recognition device shown in FIG. 1, and are stored in association with the text data.
- data other than text, such as icons, CG data, and image data, may also be stored; in that case, the amplitude data and timing data generated from the audio corresponding to those data (for example, commentary audio) are stored in association with them.
- the text analysis unit 11 analyzes the specified desired text data and recognizes the contents such as syllables, words or sentences. When icons, CG data, image data, etc. are specified, they are analyzed to recognize what is specified.
- the data reading unit 12 reads, from the data memory 13, amplitude data and timing data corresponding to the contents of the designated text data and the like based on the analysis result by the text analysis unit 11.
- the timing generator 14 receives the timing data read from the data memory 13 and generates, from the input clock CK0 of a predetermined frequency, a read clock having the irregular time intervals indicated by the timing data.
- the D-type flip-flop 15 sequentially fetches the amplitude data stored in the data memory 13 in combination with the above-mentioned timing data, at timings according to the read clock generated by the timing generator 14, and outputs them to the interpolation processing unit 16.
- the interpolation processing unit 16 receives the amplitude data at the input and output stages of the D-type flip-flop 15, that is, the amplitude data held in the flip-flop at a certain read clock timing and the amplitude data to be held at the next read clock timing (the two amplitude data at two consecutive sample points). From these two amplitude data and the timing data input from the timing generator 14, it generates digital interpolation data between the sample points by the interpolation or convolution operation described later.
- the digital interpolation data generated in this way are converted into an analog signal by the D/A converter 17 and then output as an analog synthesized voice signal via the LPF 18.
- next, the principle of the data interpolation processing in the interpolation processing section 16 will be described with reference to FIG. 7.
- suppose the data read from the data memory 13 form the numerical sequence (*, 7) (5, 3) (7, 9) (3, 1) (3, 6) (3, 3), where * denotes a value not shown in FIG. 7; the data are read out in this order.
- data of the waveform a1 is generated from two data values of the amplitude data value “7” and the timing data value “5” read from the data memory 13 by an interpolation operation.
- data of the waveform a2 is generated by interpolation from the two data values of the above-mentioned timing data value "5" and the subsequently read amplitude data value "3".
- data of the waveform b2 are generated by interpolation from the two data values of the above amplitude data value "3" and the subsequently read timing data value "7". Further, data of the waveform b1 are generated by interpolation from that timing data value "7" and the amplitude data value "9" read next. Similarly, the data of the waveforms c1, c2, d2, d1, e1, and e2 are generated in sequence from the combinations of the amplitude data values and timing data values read out in order.
- in this way, a digital signal in which the waveforms a1, b1, c1, d1, and e1 are continuous (upper part of FIG. 7) and a digital signal in which the waveforms a2, b2, c2, d2, and e2 are continuous (lower part of FIG. 7) are generated. The two digital signals are then added together and digital-to-analog converted, thereby synthesizing an analog audio signal having a waveform as shown in FIG. 7.
- FIG. 8 extracts the section from time T1 to T2 shown in FIG. 7: FIG. 8(a) shows the two waveforms a1 and a2 before the addition, and FIG. 8(b) shows the composite waveform a1 + a2 generated by the addition.
- in FIG. 8, D1 is the amplitude data value at time T1 ("7" in the example of FIG. 7), D2 is the amplitude data value at time T2 ("3"), T is the timing data value representing the time interval between times T1 and T2 ("5"), and t represents an arbitrary timing between times T1 and T2.
- the data of the waveform a1 is generated by an interpolation operation while incrementing the value of the timing t by one according to the clock CK0 based on a certain sampling frequency, using the timing t as a variable.
- the data of the waveform a2 are similarly generated by interpolation using the timing t as a variable.
- by adding the data of the waveforms a1 and a2 generated in this way, with the timing t as the variable, a waveform as shown in FIG. 8(b) is synthesized.
- the sampling frequency is increased in a pseudo manner by interpolating between discretely input digital data.
- data interpolation is performed using a predetermined sampling function.
- Figure 9 shows the sinc function as an example of the sampling function.
- FIG. 10 is a diagram for explaining a general data interpolation operation using such a sampling function.
- let the values of the discrete data at the equally spaced sampling points t1, t2, t3, and t4 be Y(t1), Y(t2), Y(t3), and Y(t4), and consider, for example, the interpolated value y corresponding to a predetermined position t0 (a distance a from t2) between the sampling points t2 and t3.
- in general, the value of the sampling function at the interpolation position t0 is obtained for each of the given discrete data, and a convolution operation is performed using these values. Specifically, the peak at the center of the sampling function is aligned with each of the sampling points t1 to t4, the value of the sampling function at the interpolation position t0 is obtained for each (the crosses in FIG. 10), and all of these values are added together.
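- a minimal sketch of this general convolution interpolation with the sinc sampling function of FIG. 9, assuming equally spaced samples at integer positions:
```python
import math

def sinc_interpolate(samples, t0):
    """Center the sampling function's peak on each sampling point n,
    evaluate it at the interpolation position t0, weight it by the
    sample value, and accumulate the sum."""
    total = 0.0
    for n, y in enumerate(samples):
        d = t0 - n
        w = 1.0 if d == 0 else math.sin(math.pi * d) / (math.pi * d)
        total += y * w
    return total
```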
- in the present embodiment, however, the discrete data are obtained by sampling the digital data at the time intervals of the points where the differential absolute value is equal to or less than a predetermined value. The intervals between the sample points are therefore not always equal and are in many cases irregular (in the example of FIG. 2, the intervals between the sample points are "5, 7, 3, 3, 3").
- accordingly, for the interval between times T1 and T2, the convolution operation described above is performed using only the sampling functions a1 and a2, which are obtained from the amplitude data values at times T1 and T2 and the timing data value representing the time interval between them; the other sampling functions b1, b2, c1, c2, d1, d2, e1, and e2 are not considered in this convolution. In other words, all the data needed to obtain the interpolated value at each interpolation position t between times T1 and T2 are available at time T2, at which point the waveform of FIG. 8(b) can be obtained. Therefore, in the present embodiment, each time the two amplitude data values D1 and D2 and the timing data value T representing their time interval are obtained at the discrete times T1 to T6, the digital waveform is synthesized sequentially by calculating the interpolated values according to the interpolation formula described below.
- FIG. 11 is a diagram for explaining the interpolation formula.
- the interpolated value between two sample points having amplitude data values D1 and D2 is calculated from two quadratic functions x1 and x2 that join continuously at the interpolation position: the interval between the two sample points is divided into a first half and a second half, with x1 used in the first half and x2 in the second half.
- here, the timing data value T, the time interval between the sample points, may be odd or even; if it is odd, the interpolation position t never falls exactly on the midpoint. Therefore, the timing data values can always be made even by performing double oversampling when the amplitude data and timing data are generated.
- for example, the five timing data values "5, 7, 3, 3, 3" shown in FIG. 2 become "10, 14, 6, 6, 6" after double oversampling, and are stored in the data memory 13 in that form. In FIG. 11, the time interval between the sample points is therefore written as 2T.
- the interpolation is then clocked at twice the original sampling frequency, and the interpolation calculation is performed sequentially as the signal sequence consisting of an amplitude data value and a timing data value is input at each of the discrete times T1 to T6.
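- since equations (6) and (7) are not reproduced in this text, the following sketch shows one plausible piecewise-quadratic interpolation consistent with the description: zero slope at both sample points, with value and slope continuous at the midpoint. The exact coefficients are assumptions, not the patent's own formulas:
```python
def interpolate_pair(d1, d2, two_t):
    """Interpolate between amplitude values d1 and d2 over the
    doubly-oversampled (even) interval two_t = 2T, using quadratic
    x1 in the first half and quadratic x2 in the second half."""
    out = []
    for t in range(two_t):
        if t < two_t // 2:                 # first half: x1(t)
            v = d1 + (d2 - d1) * (2 * t * t) / (two_t * two_t)
        else:                              # second half: x2(t)
            r = two_t - t
            v = d2 - (d2 - d1) * (2 * r * r) / (two_t * two_t)
        out.append(v)
    return out
```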
- as described above, in the present embodiment, the amplitude data at each sample point generated from actual speech and the timing data indicating the intervals between the sample points are associated with text data, and when text data is designated, an analog audio signal is synthesized and output by interpolation from the corresponding amplitude data and timing data. Since only the amplitude data and timing data need to be stored, high-quality speech close to a real human voice can be synthesized from text data.
- moreover, since the read-out data can be processed sequentially with a simple interpolation operation to synthesize the voice, real-time operation can be realized.
- the interpolation calculation processing shown in equations (6) and (7) above can be realized by a hardware configuration such as a logic circuit, by a DSP (Digital Signal Processor), or by software (a program stored in ROM, RAM, or the like).
- next, a second embodiment will be described. In the speech recognition apparatus of the second embodiment, the given digital data is oversampled n times before the amplitude data and the timing data are generated by the data generation unit 3.
- by performing a moving average or convolution calculation (hereinafter referred to as a "convolution operation"), smoother data in which the discrete data are connected by interpolation can be obtained.
- FIG. 12 is a block diagram showing an example of the overall configuration of the speech recognition device according to the second embodiment.
- the speech recognition apparatus of the second embodiment differs from that of the first embodiment shown in FIG. 1 in that an oversampling circuit 9 and a PLL (Phase Locked Loop) circuit 10 have been added.
- the oversampling circuit 9 is provided between the A/D converter 2 and the data generator 3, and performs n-times oversampling and a convolution operation on the digital data input from the A/D converter 2, thereby obtaining digital interpolated values that fill the gaps between the discrete data.
- for example, the oversampling circuit 9 receives audio data sampled at a frequency of 44.1 kHz, oversamples it at eight times that frequency (352.8 kHz), and performs the convolution operation. The series of oversampled data thus obtained is output to the data generator 3.
- the data generation unit 3 detects the sample points from the series of oversampled data supplied by the oversampling circuit 9, using any of the methods described above. The sets of the amplitude data value at each detected sample point and the timing data value indicating the time interval at which each sample point appears are then output, according to the mode specified by the mode specifying unit 8 at that time, to the data registration unit 5 or the data collation unit 7.
- the PLL circuit 10 generates a clock CK1 of eight-fold frequency (352.8 kHz) from the input clock CK0 of the reference frequency (for example, 44.1 kHz) and supplies it to the oversampling circuit 9 and the data generator 3, which operate in synchronization with this eight-fold frequency clock CK1.
- the speech synthesizer of the second embodiment is the same as that of the first embodiment shown in FIG. 6, except that a clock generator (not shown) must be added. This clock generator generates the eight-fold frequency clock CK1 from the reference-frequency input clock CK0 and supplies it to the timing generator 14, the interpolation processing unit 16, and the D/A converter 17.
- FIG. 13 is an explanatory diagram of a digital basic waveform used in the present embodiment.
- the digital basic waveform shown in FIG. 13 is the basis of the sampling function used when performing data interpolation by oversampling.
- This digital basic waveform is created by changing the data value in the sequence -1, 1, 8, 8, 1, -1 at each clock (CK0) of the reference frequency.
- although FIG. 14 shows an example of four-times oversampling for ease of illustration, the oversampling circuit 9 in FIG. 12 actually performs eight-times oversampling.
- the series of numerical values shown in the leftmost column is the original discrete data (-1, 1, 8, 8, 1, -1)/8 oversampled four-fold.
- the numerical sequences in the second to fourth columns from the left are the sequence shown in the leftmost column shifted downward one step at a time.
- the column direction in Fig. 14 shows the time axis. Shifting the numerical sequence downward corresponds to gradually delaying the numerical sequence shown in the leftmost column.
- that is, the second numerical sequence from the left is the sequence in the leftmost column shifted by 1/4 phase of the quadruple-frequency clock 4CLK.
- likewise, the numerical sequence in the third column from the left is the sequence in the second column shifted by a further 1/4 phase of the quadruple-frequency clock 4CLK, and the sequence in the fourth column is the sequence in the third column shifted by yet another 1/4 phase of the clock 4CLK.
- the fifth numerical column from the left is obtained by adding the first to fourth columns row by row and dividing by four. By the processing up to the fifth column, four-times oversampling with a four-phase convolution operation is performed digitally.
- similarly, the sixth to eighth numerical columns are the sequence shown in the fifth column shifted downward one step at a time.
- the ninth column from the left is obtained by adding the fifth to eighth columns row by row and dividing by four. By the processing up to the ninth column from the left, four-times oversampling with a four-phase convolution operation has been performed digitally twice.
- the tenth numerical sequence from the left is the sequence shown in the ninth column shifted down by one. The eleventh (rightmost) column is obtained by adding the ninth and tenth columns row by row and dividing by two. This rightmost numerical sequence is the target interpolated value.
- Figure 15 is a graph of the finally obtained numerical sequence shown in the rightmost column of Fig. 14.
- the function with the waveform shown in FIG. 15 can be differentiated once over its entire domain, takes finite values other than 0 only when the sample position t along the horizontal axis is between 1 and 33, and has the value 0 everywhere else.
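- the FIG. 14 table operations can be reproduced numerically with the short sketch below; the digital basic waveform values -1, 1, 8, 8, 1, -1 are the reconstruction used above, and the zero-padding at the start of each shifted copy is an assumption about the table's blank cells:
```python
def shift_and_average(seq, phases):
    """Row-wise average of `phases` copies of seq, each shifted down
    one step further (the FIG. 14 column operation). Entries before
    the start of a shifted copy are treated as 0."""
    n = len(seq) + phases - 1
    return [sum(seq[r - k] if 0 <= r - k < len(seq) else 0
                for k in range(phases)) / phases
            for r in range(n)]

base = [v / 8 for v in (-1, 1, 8, 8, 1, -1)]   # digital basic waveform / 8
held = [v for v in base for _ in range(4)]     # 4x oversampling by holding
col5 = shift_and_average(held, 4)              # first 4-phase convolution
col9 = shift_and_average(col5, 4)              # second 4-phase convolution
result = shift_and_average(col9, 2)            # final 2-phase average (FIG. 15)
```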
- FIG. 16 is a block diagram showing a configuration example of the oversampling circuit 9 shown in FIG. 12.
- the oversampling circuit 9 of this embodiment comprises a normalized data storage unit 41, a phase shift unit 42, a plurality of digital multipliers 43a to 43d, and a plurality of digital adders 44a to 44c.
- the PLL circuit 10 shown in FIG. 16 is the same as that shown in FIG. 12.
- the normalized data storage unit 41 stores the normalized data sequence, shifted into four phases, shown in the rightmost column of FIG. 14. Note that FIG. 14 shows an example of four-times oversampling of the digital basic waveform of FIG. 13, whereas the oversampling circuit 9 in FIG. 12 performs eight-times oversampling; the normalized data storage unit 41 therefore stores a data sequence obtained by oversampling the digital basic waveform eight times and normalizing it by the convolution operation.
- the four-phase normalized data stored in the normalized data storage unit 41 are read out in accordance with the clocks CK0 and CK1 supplied from the PLL circuit 10 and supplied to one input terminal of each of the four digital multipliers 43a to 43d.
- the phase shift unit 42 performs phase shift processing that shifts the discrete data input from the A/D converter 2 into four phases.
- the four-phase discrete data generated by the phase shift unit 42 are output in accordance with the clocks CK0 and CK1 supplied from the PLL circuit 10 and supplied to the other input terminal of each of the four digital multipliers 43a to 43d.
- the four digital multipliers 43a to 43d multiply the four-phase normalized data output from the normalized data storage unit 41 by the four-phase discrete data output from the phase shift unit 42.
- the three digital adders 44a to 44c connected at the subsequent stage add all the multiplication results of the four digital multipliers 43a to 43d and output the sum to the data generation unit 3 in FIG. 12.
- that is, in this embodiment, the normalized data in the rightmost column, obtained by the convolution operation shown in FIG. 14, are stored in advance in the normalized data storage unit 41. These normalized data are modulated to an amplitude corresponding to the value of the input discrete data, and the results are combined and output by a four-phase convolution operation. In principle, the amplitude values of the input discrete data could be multiplied by the digital basic waveform of FIG. 13 and the resulting data values subjected to the convolution operation of FIG. 14 at the time of speech recognition; but if the oversampling circuit 9 is configured as in FIG. 16, the convolution operation of FIG. 14 itself need not be performed during recognition, which has the advantage of faster processing.
- in the example described above, the oversampling circuit 9 performs eight-times oversampling, but it is not limited to eight times; it may, for example, perform two-times or four-times oversampling.
- as described above, in the second embodiment, the input discrete digital data are made to change smoothly by oversampling and a convolution operation. From the oversampled data thus obtained, discrete amplitude data values and the timing data values representing their irregular time intervals are then obtained.
- the function generated from the digital basic waveform by oversampling and the convolution operation is a sampling function of finite support whose value converges to 0 at finite sample positions and which can be differentiated once over its entire domain. Therefore, when obtaining an interpolated value, only a limited number of discrete data values need be considered, which greatly reduces the amount of processing. Moreover, since no truncation error occurs, accurate interpolated values are obtained; performing speech recognition with these interpolated values improves the speech recognition rate.
- the speech recognition / speech synthesis methods according to the first and second embodiments described above can be realized by any of a hardware configuration, a DSP, and software.
- when realized by software, the speech recognition device and speech synthesis device of the present embodiment are actually constituted by the CPU or MPU, RAM, ROM, and the like of a computer, and are realized by the operation of programs stored in the RAM or ROM.
- the present invention can be realized by recording a program that causes a computer to perform the functions of the present embodiment on a recording medium such as a CD-ROM, and reading the program into the computer.
- a recording medium for recording the above program a floppy disk, a hard disk, a magnetic tape, a magneto-optical disk, a nonvolatile memory card, or the like can be used in addition to the CD-ROM.
- the program is included in the embodiments of the present invention not only when a computer executes the supplied program to realize the functions of the above embodiments, but also when the functions are realized in cooperation with the OS (operating system) or other application software running on the computer, or when all or part of the processing of the supplied program is performed by a function expansion board or function expansion unit of the computer.
- as described above, the present invention provides an unprecedented new speech recognition method and speech synthesis method using amplitude data and timing data, and is useful for improving the speech recognition rate, improving the quality of synthesized speech, and simplifying processing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP01919863A EP1288912A4 (en) | 2000-04-14 | 2001-04-10 | Speech recognition method and device, speech synthesis method and device, recording medium
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2000114262A JP2001296883A (ja) | 2000-04-14 | 2000-04-14 | 音声認識方法および装置、音声合成方法および装置、記録媒体 |
JP2000-114262 | 2000-04-14 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2001080222A1 true WO2001080222A1 (fr) | 2001-10-25 |
Family
ID=18626092
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2001/003079 WO2001080222A1 (fr) | 2000-04-14 | 2001-04-10 | Procede et dispositif de reconnaissance vocale, procede et dispositif de synthese vocale, support d'enregistrement |
Country Status (7)
Country | Link |
---|---|
US (1) | US20030093273A1 (ja) |
EP (1) | EP1288912A4 (ja) |
JP (1) | JP2001296883A (ja) |
KR (1) | KR20030003252A (ja) |
CN (1) | CN1195293C (ja) |
TW (1) | TW569180B (ja) |
WO (1) | WO2001080222A1 (ja) |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI243356B (en) * | 2003-05-15 | 2005-11-11 | Mediatek Inc | Method and related apparatus for determining vocal channel by occurrences frequency of zeros-crossing |
CN100375996C (zh) * | 2003-08-19 | 2008-03-19 | 联发科技股份有限公司 | 判断声音信号中是否混有低频声音信号的方法及相关装置 |
CN100524457C (zh) * | 2004-05-31 | 2009-08-05 | 国际商业机器公司 | 文本至语音转换以及调整语料库的装置和方法 |
JP3827317B2 (ja) * | 2004-06-03 | 2006-09-27 | 任天堂株式会社 | コマンド処理装置 |
JP4204541B2 (ja) | 2004-12-24 | 2009-01-07 | 株式会社東芝 | 対話型ロボット、対話型ロボットの音声認識方法および対話型ロボットの音声認識プログラム |
CN100349206C (zh) * | 2005-09-12 | 2007-11-14 | 周运南 | 文字语音互转装置 |
JP4455633B2 (ja) * | 2007-09-10 | 2010-04-21 | 株式会社東芝 | 基本周波数パターン生成装置、基本周波数パターン生成方法及びプログラム |
JP2010190955A (ja) * | 2009-02-16 | 2010-09-02 | Toshiba Corp | 音声合成装置、方法及びプログラム |
KR101126614B1 (ko) * | 2010-01-28 | 2012-03-26 | 황여실 | 음향신호 출력 장치 |
JP2012003162A (ja) * | 2010-06-18 | 2012-01-05 | Adtex:Kk | 人工的に有声音を生成する方法および有声音生成装置 |
CN109731331B (zh) * | 2018-12-19 | 2022-02-18 | 网易(杭州)网络有限公司 | 声音信息处理方法及装置、电子设备、存储介质 |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4214125A (en) * | 1977-01-21 | 1980-07-22 | Forrest S. Mozer | Method and apparatus for speech synthesizing |
US4181813A (en) * | 1978-05-08 | 1980-01-01 | John Marley | System and method for speech recognition |
US6898277B1 (en) * | 2001-03-05 | 2005-05-24 | Verizon Corporate Services Group Inc. | System and method for annotating recorded information from contacts to contact center |
2000
- 2000-04-14 JP JP2000114262A patent/JP2001296883A/ja active Pending
2001
- 2001-04-10 KR KR1020027013658A patent/KR20030003252A/ko not_active Application Discontinuation
- 2001-04-10 US US10/240,664 patent/US20030093273A1/en not_active Abandoned
- 2001-04-10 EP EP01919863A patent/EP1288912A4/en not_active Withdrawn
- 2001-04-10 WO PCT/JP2001/003079 patent/WO2001080222A1/ja not_active Application Discontinuation
- 2001-04-10 CN CNB018080219A patent/CN1195293C/zh not_active Expired - Fee Related
- 2001-04-12 TW TW090108811A patent/TW569180B/zh active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH01149099A (ja) * | 1987-12-05 | 1989-06-12 | Murakami Kogyosho:Kk | 信号の識別装置 |
JPH10247099A (ja) * | 1997-03-05 | 1998-09-14 | Dainippon Printing Co Ltd | 音声信号の符号化方法および音声の記録再生装置 |
JPH1173199A (ja) * | 1997-08-29 | 1999-03-16 | Dainippon Printing Co Ltd | 音響信号の符号化方法およびコンピュータ読み取り可能な記録媒体 |
JP6077198B2 (ja) * | 2011-05-11 | 2017-02-08 | Dowaエレクトロニクス株式会社 | 六方晶フェライト凝集粒子 |
Non-Patent Citations (1)
Title |
---|
See also references of EP1288912A4 * |
Also Published As
Publication number | Publication date |
---|---|
CN1195293C (zh) | 2005-03-30 |
US20030093273A1 (en) | 2003-05-15 |
JP2001296883A (ja) | 2001-10-26 |
TW569180B (en) | 2004-01-01 |
KR20030003252A (ko) | 2003-01-09 |
EP1288912A4 (en) | 2005-09-28 |
CN1423809A (zh) | 2003-06-11 |
EP1288912A1 (en) | 2003-03-05 |
Legal Events

Code | Title | Description
---|---|---
AK | Designated states | Kind code of ref document: A1; Designated state(s): CN KR US
AL | Designated countries for regional patents | Kind code of ref document: A1; Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
DFPE | Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101) |
WWE | Wipo information: entry into national phase | Ref document number: 10240664; Country of ref document: US
WWE | Wipo information: entry into national phase | Ref document number: 1020027013658; Country of ref document: KR
WWE | Wipo information: entry into national phase | Ref document number: 018080219; Country of ref document: CN
WWE | Wipo information: entry into national phase | Ref document number: 2001919863; Country of ref document: EP
WWP | Wipo information: published in national office | Ref document number: 1020027013658; Country of ref document: KR
WWP | Wipo information: published in national office | Ref document number: 2001919863; Country of ref document: EP
WWW | Wipo information: withdrawn in national office | Ref document number: 1020027013658; Country of ref document: KR
WWW | Wipo information: withdrawn in national office | Ref document number: 2001919863; Country of ref document: EP