WO2022105693A1 - Sample generation method and apparatus - Google Patents
Sample generation method and apparatus
- Publication number: WO2022105693A1 (PCT/CN2021/130459)
- Authority: WIPO (PCT)
Classifications
- G10L15/04—Segmentation; Word boundary detection
- G10L13/02—Methods for producing synthetic speech; Speech synthesisers
- G10L13/07—Concatenation rules
- G10L13/08—Text analysis or generation of parameters for speech synthesis out of text, e.g. grapheme to phoneme translation, prosody generation or stress or intonation determination
- G10L13/10—Prosody rules derived from text; Stress or intonation
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
Description
- This specification relates to the technical field of data processing, and in particular, to a method and device for generating samples.
- Speech synthesis, also known as text-to-speech (TTS) technology, is a technology for converting text into speech.
- In the prior art, the waveform-splicing synthesis method requires a long duration of training data to complete speech synthesis; the parameter-based synthesis method can complete speech synthesis, but it considers few reference factors, so the final synthesis result is unsatisfactory; and the end-to-end, neural-network-based synthesis method most widely used in the prior art requires a large amount of data.
- In view of this, the embodiments of the present specification provide a sample generation method. This specification also relates to a sample generation apparatus, a computing device, and a computer-readable storage medium, so as to solve the technical defects existing in the prior art.
- A sample generation method is provided, including:
- acquiring a plurality of text-audio pairs, wherein each text-audio pair contains a text segment and an audio segment;
- calculating an audio feature of the audio segment of each text-audio pair in the plurality of text-audio pairs, and screening out, from the plurality of text-audio pairs according to the audio features, a target text-audio pair and a spliced text-audio pair corresponding to the target text-audio pair;
- splicing the target text-audio pair and the spliced text-audio pair into a to-be-detected text-audio pair, and detecting the to-be-detected text-audio pair;
- in the case that the to-be-detected text-audio pair meets a preset detection condition, writing the to-be-detected text-audio pair into a training database.
- the acquiring multiple text-audio pairs includes:
- acquiring a target text and audio corresponding to the target text;
- preprocessing the audio to obtain target audio, and converting the target text into a phoneme sequence;
- aligning the phoneme sequence with the target audio, and generating the plurality of text-audio pairs according to the alignment result.
- the generating the multiple text-audio pairs according to the alignment processing result includes:
- obtaining a phoneme audio file according to the alignment processing result, and determining a segmentation position of the phoneme audio file;
- segmenting the phoneme audio file according to the segmentation position to obtain a plurality of phoneme-audio pairs, wherein each phoneme-audio pair includes a phoneme segment and an audio segment;
- determining, based on the target text, the text segment corresponding to the phoneme segment of each phoneme-audio pair;
- generating the plurality of text-audio pairs according to the text segments corresponding to the phoneme segments in each phoneme-audio pair and the audio segments in each phoneme-audio pair.
- the calculating the audio feature of the audio segment of each text-audio pair in the plurality of text-audio pairs includes:
- extracting the audio segment of each text-audio pair in the plurality of text-audio pairs, and performing framing processing on the audio segment of each text-audio pair to obtain an audio frame set of each text-audio pair;
- calculating, based on the audio frames included in the audio frame set of each text-audio pair, a pitch frequency feature and an audio frame feature of the audio segment of each text-audio pair;
- determining the audio feature of the audio segment of each text-audio pair according to the pitch frequency feature and the audio frame feature of the audio segment of each text-audio pair.
- Screening out the target text-audio pair and the spliced text-audio pair corresponding to the target text-audio pair from the plurality of text-audio pairs according to the audio features includes:
- integrating the audio segment, text segment, and audio feature of each text-audio pair in the plurality of text-audio pairs to obtain a text-audio package corresponding to each text-audio pair, and writing it into a segment database;
- selecting any text-audio package in the segment database as the target text-audio package, and determining the text-audio pair in the target text-audio package as the target text-audio pair;
- determining a spliced text-audio package based on the audio features and the text-audio packages other than the target text-audio package in the segment database, and using the text-audio pair in the spliced text-audio package as the spliced text-audio pair.
- Determining the spliced text-audio package based on the audio features and the text-audio packages other than the target text-audio package in the segment database includes:
- selecting the text-audio packages other than the target text-audio package in the segment database to form a set of to-be-screened text-audio packages;
- determining the text-audio pair of each to-be-screened text-audio package in the set as a to-be-screened text-audio pair;
- screening out the spliced text-audio package from the set of to-be-screened text-audio packages based on the audio features of the audio segment of the target text-audio pair and the audio features of the audio segments of the to-be-screened text-audio pairs.
- Screening out the spliced text-audio package from the set of to-be-screened text-audio packages includes:
- determining a first audio feature of the audio segment in the target text-audio pair and a second audio feature of the audio segment in each to-be-screened text-audio pair, and calculating the feature distance between the first audio feature and each second audio feature;
- determining, as the spliced text-audio package, the to-be-screened text-audio package to which a to-be-screened text-audio pair whose feature distance is smaller than a preset distance threshold belongs.
- Before the step of splicing the target text-audio pair and the spliced text-audio pair into a to-be-detected text-audio pair and detecting the to-be-detected text-audio pair, the method further comprises: determining target sampling information of the audio segment and target text information of the text segment in the target text-audio pair, and detecting the target text-audio pair based on the target sampling information and the target text information.
- Splicing the target text-audio pair and the spliced text-audio pair into a to-be-detected text-audio pair includes:
- extracting a target text segment and a target audio segment from the target text-audio pair, and extracting a spliced text segment and a spliced audio segment from the spliced text-audio pair;
- splicing the target text segment and the spliced text segment into a to-be-detected text segment, and splicing the target audio segment and the spliced audio segment into a to-be-detected audio segment;
- forming the to-be-detected text-audio pair based on the to-be-detected text segment and the to-be-detected audio segment.
- Detecting the to-be-detected text-audio pair includes: sampling the to-be-detected audio segment in the to-be-detected text-audio pair to obtain a sampling result, and determining the text length of the to-be-detected text segment.
- Writing the to-be-detected text-audio pair into the training database includes: in the case that the sampling result and the text length satisfy the preset detection condition, writing the to-be-detected text-audio pair into the training database.
- The method further includes: in the case that the to-be-detected text-audio pair does not satisfy the preset detection condition, screening out a multi-degree spliced text-audio pair corresponding to the spliced text-audio pair from the plurality of text-audio pairs according to the audio features, and continuing the splicing and detection processing.
- The method further includes: extracting a sample text-audio pair from the training database, wherein the sample text-audio pair includes a sample text segment and a sample audio segment; and training a speech synthesis model based on the sample text segment and the sample audio segment to obtain a target speech synthesis model.
- A sample generation apparatus is provided, including:
- an acquisition module configured to acquire a plurality of text-audio pairs, wherein each text-audio pair includes a text segment and an audio segment;
- a calculation module configured to calculate the audio feature of the audio segment of each text-audio pair in the plurality of text-audio pairs, and to screen out, from the plurality of text-audio pairs according to the audio features, the target text-audio pair and the spliced text-audio pair corresponding to the target text-audio pair;
- a splicing module configured to splice the target text-audio pair and the spliced text-audio pair into a to-be-detected text-audio pair, and to detect the to-be-detected text-audio pair;
- a writing module configured to write the to-be-detected text-audio pair into a training database when the to-be-detected text-audio pair meets a preset detection condition.
- A computing device is provided, including a memory and a processor;
- the memory is configured to store computer-executable instructions;
- the processor is configured to execute the computer-executable instructions to: acquire a plurality of text-audio pairs, wherein each text-audio pair contains a text segment and an audio segment; calculate the audio feature of the audio segment of each text-audio pair, and screen out the target text-audio pair and the corresponding spliced text-audio pair from the plurality of text-audio pairs according to the audio features; splice the target text-audio pair and the spliced text-audio pair into a to-be-detected text-audio pair and detect it; and, in the case that the to-be-detected text-audio pair satisfies the preset detection condition, write the to-be-detected text-audio pair into a training database.
- A computer-readable storage medium is also provided, which stores computer-executable instructions that, when executed by a processor, implement the steps of the sample generation method.
- This specification provides a sample generation method: after a plurality of text-audio pairs are acquired, the audio feature of the audio segment of each text-audio pair in the plurality of text-audio pairs is calculated; a target text-audio pair and a spliced text-audio pair corresponding to the target text-audio pair are screened out from the plurality of text-audio pairs according to the audio features; the target text-audio pair and the spliced text-audio pair are then spliced into a to-be-detected text-audio pair, which is detected; and, when the to-be-detected text-audio pair satisfies a preset detection condition, the to-be-detected text-audio pair is written into the training database. In this way, high-quality sample data that meets the needs of downstream business use can be obtained by splicing in the sample data preparation stage, which saves resource-consumption costs in the data preparation stage; and since a large amount of sample data can be written into the training database after splicing, the problem of insufficient sample data is effectively alleviated.
- FIG. 1 is a flowchart of a sample generation method provided by an embodiment of the present specification
- FIG. 2 is a schematic diagram of an alignment processing result in a sample generation method provided by an embodiment of the present specification
- FIG. 3 is a schematic diagram of a segmentation processing result in a sample generation method provided by an embodiment of the present specification;
- FIG. 4 is a flowchart of screening out a target text-audio pair and a spliced text-audio pair in a sample generation method provided by an embodiment of the present specification;
- FIG. 5 is a flowchart of a sample generation method applied in a speech synthesis scenario provided by an embodiment of this specification;
- FIG. 6 is a schematic structural diagram of a sample generating apparatus provided by an embodiment of the present specification.
- FIG. 7 is a structural block diagram of a computing device provided by an embodiment of the present specification.
- F0 (fundamental frequency): in general, a sound is composed of a series of vibrations with different frequencies and amplitudes emitted by the sounding body; among these vibrations, the one with the lowest frequency produces the fundamental tone, and its frequency is the fundamental frequency.
- Forced alignment: a technique for obtaining the temporal correspondence between a given phoneme sequence and speech; it can be performed with forced-alignment tools such as Kaldi (an open-source speech recognition toolkit that uses WFST-based decoding) or HTK (the HMM Toolkit, a speech processing tool based on the HMM model) to align phoneme sequences with audio.
- Phoneme: the smallest phonetic unit, divided according to the natural properties of speech; it is analyzed according to the articulatory actions within a syllable, with one action constituting one phoneme. Phonemes are divided into vowels and consonants. For example, the Chinese syllable ah (ā) has only one phoneme, ài (love) has two phonemes, and dài has three phonemes. In Chinese, phonemes correspond to pinyin; in English, they correspond to phonetic symbols.
- In this specification, a sample generation method is provided; the specification also relates to a sample generation apparatus, a computing device, and a computer-readable storage medium, which are described in detail in the following embodiments.
- This specification provides a sample generation method: after a plurality of text-audio pairs are acquired, the audio feature of the audio segment of each text-audio pair in the plurality of text-audio pairs is calculated; a target text-audio pair and a spliced text-audio pair corresponding to the target text-audio pair are screened out from the plurality of text-audio pairs according to the audio features; the target text-audio pair and the spliced text-audio pair are then spliced into a to-be-detected text-audio pair, which is detected; and, when the to-be-detected text-audio pair satisfies a preset detection condition, the to-be-detected text-audio pair is written into the training database. In this way, high-quality sample data that meets the needs of downstream business use can be obtained by splicing in the sample data preparation stage, which saves the cost of resource consumption in the data preparation stage; and since the amount of sample data written into the training database after splicing is large, the problem of insufficient sample data is effectively alleviated.
- FIG. 1 shows a flowchart of a sample generation method according to an embodiment of the present specification, which specifically includes the following steps:
- Step S102 acquiring a plurality of text-audio pairs, wherein each text-audio pair includes a text segment and an audio segment.
- the text-audio pair specifically refers to a queue composed of text segments and audio segments that have a corresponding relationship.
- the text segments include, but are not limited to, character units, word units, or sentence units;
- the audio segments include, but are not limited to, speech matching the character units, word units, or sentence units.
- In this embodiment, the preparation of sample data is realized by splicing, so that a large amount of sample data can be spliced out for the downstream business; and in order to meet the quality requirements of the sample data, the splicing processing is completed in combination with audio features, thereby completing the preparation of the sample data.
- In this embodiment, any one text out of a small number of texts is taken as an example to describe the sample generation method.
- In this embodiment, the process of acquiring the text-audio pairs is specifically implemented as follows:
- the audio is preprocessed to obtain target audio, and the target text is converted into a phoneme sequence;
- the phoneme sequence is aligned with the target audio, and the plurality of text-audio pairs are generated according to the alignment result.
- the target text includes but is not limited to an article or a sentence, etc.
- The audio specifically refers to speech generated for the target text; the audio corresponding to the target text may be recorded or generated by speech synthesis, which is not limited in this embodiment. It should be noted that the matching degree between the audio and the target text should be relatively high, so as to ensure that more data can be written into the training database during subsequent splicing.
- The target audio specifically refers to the audio obtained by standardizing the original audio; the phoneme sequence specifically refers to a sequence composed of the smallest units that constitute the target text; the alignment processing specifically refers to finding, in the audio, the time interval corresponding to the text.
- When the text-audio pairs are generated, the alignment processing is completed starting from the smallest unit of the text; that is, after the target text is obtained, it is converted into a phoneme sequence so that text and audio can be aligned at the smallest unit.
- In specific implementation, the audio is first preprocessed to obtain the target audio, removing the parts of the audio that would interfere with subsequent processing, such as blank (unvoiced) audio segments at the beginning and/or end of the audio, or noisy audio segments at the beginning and/or end whose pronunciation content cannot be distinguished.
- Next, the target text is converted into a phoneme sequence, so that text and audio can be aligned at the level of the smallest unit to improve alignment accuracy; finally, the phoneme sequence is aligned with the target audio, and the plurality of text-audio pairs are obtained according to the alignment processing result.
- The alignment of the phoneme sequence and the target audio can be completed with the Kaldi alignment tool or the HTK alignment tool; other alignment tools may also be selected according to actual needs, which is not limited in this embodiment. A sketch of the preprocessing step is shown below.
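The following is a minimal sketch of the preprocessing step described above, assuming librosa and soundfile are available; the silence threshold (top_db) is an illustrative choice rather than a value taken from this specification.

```python
# Illustrative sketch of the preprocessing step: trimming blank (silent) segments
# at the beginning and end of the recording before alignment. librosa is assumed;
# the top_db threshold is an arbitrary, illustrative choice.
import librosa
import soundfile as sf

def preprocess_audio(audio_path: str, out_path: str, top_db: float = 30.0) -> float:
    """Trim leading/trailing silence and save the resulting target audio.

    Returns the duration of the trimmed audio in seconds.
    """
    waveform, sample_rate = librosa.load(audio_path, sr=None)       # keep native sample rate
    trimmed, _ = librosa.effects.trim(waveform, top_db=top_db)      # drop silent ends
    sf.write(out_path, trimmed, sample_rate)
    return len(trimmed) / sample_rate

# Example: a 12 s recording with silence at both ends becomes roughly 10 s of target audio.
```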
- the specific implementation method is as follows:
- obtain a phoneme audio file according to the alignment processing result, and determine the segmentation position of the phoneme audio file;
- segment the phoneme audio file according to the segmentation position to obtain a plurality of phoneme-audio pairs, wherein each phoneme-audio pair includes a phoneme segment and an audio segment;
- determine, based on the target text, the text segment corresponding to the phoneme segment in each phoneme-audio pair;
- generate the plurality of text-audio pairs according to the text segments corresponding to the phoneme segments in each phoneme-audio pair and the audio segments in each phoneme-audio pair.
- the phoneme audio file specifically refers to a file obtained by aligning the phoneme sequence and the target audio;
- the segmentation position may be the position of a sentence break in the target audio, or a position at which the pause in pronunciation exceeds a set time threshold;
- the phoneme-audio pair specifically refers to a queue composed of phoneme segments and audio segments having a corresponding relationship.
- In specific implementation, the phoneme audio file is segmented to obtain a plurality of phoneme-audio pairs, each of which includes a phoneme segment and its corresponding audio segment; then, based on the target text, the phoneme segment of each phoneme-audio pair is converted into a text segment, so that a text-audio pair is formed from the text segment corresponding to the phoneme segment and the audio segment in each phoneme-audio pair, each text-audio pair containing a text segment and its corresponding audio segment.
- The plurality of text-audio pairs formed at this point can be used in subsequent processing to splice out sample data for writing into the training database, thereby completing the preparation of the sample data.
- Since the phoneme audio file is segmented at sentence breaks or pauses, the phoneme segments and audio segments contained in each segmented phoneme-audio pair remain in correspondence; and, according to the characteristics of the speaker's speech, the phoneme segments in the segmented phoneme-audio pairs are guaranteed to have corresponding text segments in the target text, so there is no problem of incomplete phoneme fragments after segmentation. A sketch of this segmentation is given below.
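A minimal sketch of the segmentation step is shown below. It assumes the alignment result is available as a list of (phoneme, start, end) triples and uses an illustrative pause threshold; neither the data layout nor the threshold value is prescribed by this specification.

```python
# Split the aligned phoneme sequence wherever the gap between consecutive phonemes
# exceeds a pause threshold, yielding phoneme-audio pairs of the form
# (phoneme fragment, (segment start, segment end)).
from typing import List, Tuple

Aligned = Tuple[str, float, float]  # (phoneme, start time in s, end time in s) - assumed layout

def segment_phoneme_audio(aligned: List[Aligned], pause_threshold: float = 0.3):
    pairs, current = [], [aligned[0]]
    for prev, cur in zip(aligned, aligned[1:]):
        if cur[1] - prev[2] > pause_threshold:   # pronunciation pause -> cut here
            pairs.append((" ".join(p for p, _, _ in current), (current[0][1], current[-1][2])))
            current = []
        current.append(cur)
    pairs.append((" ".join(p for p, _, _ in current), (current[0][1], current[-1][2])))
    return pairs

# For the example below, this would yield pairs such as
# ("wo kan le", (0.0, 3.0)), ("yi chang", (3.0, 4.0)), ("jing cai de", (4.0, 6.0)), ...
```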
- For example, the target text is "I watched a wonderful football game", and 12 s of audio is generated for the target text. First, the blank audio segments at the beginning and end of the audio are deleted, obtaining target audio with a duration of 10 s. To improve alignment accuracy, the target text is converted into the corresponding phoneme sequence (wo kan le yi chang jing cai de zu qiu bi sai), and the Kaldi alignment tool is used to align the phoneme sequence with the target audio, giving the alignment processing result shown in FIG. 2, i.e., the phoneme audio file composed of the phoneme sequence and the target audio.
- Then the phoneme audio file is segmented according to the segmentation positions to obtain five phoneme-audio pairs: the first phoneme-audio pair P1 consists of the first phoneme segment (wo kan le) and the first audio segment (0s-3s); the second phoneme-audio pair P2 consists of the second phoneme segment (yi chang) and the second audio segment (3s-4s); the third phoneme-audio pair P3 consists of the third phoneme segment (jing cai de) and the third audio segment (4s-6s); the fourth phoneme-audio pair P4 consists of the fourth phoneme segment (zu qiu) and the fourth audio segment (6s-8s); and the fifth phoneme-audio pair P5 consists of the fifth phoneme segment (bi sai) and the fifth audio segment (8s-10s).
- After the phoneme-audio pairs P1 to P5 are obtained, the phoneme segment in each phoneme-audio pair still needs to be converted into a text segment in order to obtain text-audio pairs usable for subsequent splicing processing. Based on the target text "I watched a wonderful football game", the text segment corresponding to the phoneme segment in each phoneme-audio pair can be determined: the first text segment corresponding to the first phoneme segment (wo kan le) in the first phoneme-audio pair P1 is (I watched); the second text segment corresponding to the second phoneme segment (yi chang) in the second phoneme-audio pair P2 is (a); the third text segment corresponding to the third phoneme segment (jing cai de) in the third phoneme-audio pair P3 is (wonderful); the fourth text segment corresponding to the fourth phoneme segment (zu qiu) in the fourth phoneme-audio pair P4 is (football); and the fifth text segment corresponding to the fifth phoneme segment (bi sai) in the fifth phoneme-audio pair P5 is (game).
- Finally, a plurality of text-audio pairs corresponding to the target text and the target audio can be generated from the obtained text segments and audio segments, as shown in the segmentation result in FIG. 3: the first text-audio pair TA1 consists of the first text segment (I watched) and the first audio segment (0s-3s); the second text-audio pair TA2 consists of the second text segment (a) and the second audio segment (3s-4s); the third text-audio pair TA3 consists of the third text segment (wonderful) and the third audio segment (4s-6s); the fourth text-audio pair TA4 consists of the fourth text segment (football) and the fourth audio segment (6s-8s); and the fifth text-audio pair TA5 consists of the fifth text segment (game) and the fifth audio segment (8s-10s). These pairs are used to subsequently splice out sample data that meets the requirements for writing into the training database, for use when training a speech synthesis model.
- Step S104 calculating the audio feature of the audio segment of each text-audio pair in the plurality of text-audio pairs, and screening out, from the plurality of text-audio pairs according to the audio features, the target text-audio pair and the spliced text-audio pair corresponding to the target text-audio pair.
- Since the text-audio pairs written into the training database are used for training the model, in order to improve the prediction accuracy of the trained model, the quality of the sample data used to train the model must also be ensured; that is, when splicing out text-audio pairs that can be written into the training database, the timbre and rhythm of the text-audio pairs being spliced must also be considered.
- If the two text-audio pairs differ in timbre and rhythm, or their pitch fluctuations are inconsistent, the spliced text-audio pair will have mismatched audio segments and semantically inconsistent text segments, and cannot be used to train the model.
- Therefore, before splicing, the present application calculates the audio features of each text-audio pair and then, based on the audio features, selects text-audio pairs that can be spliced from the plurality of text-audio pairs, so that pairs with similar pitch, rhythm, and other attributes are spliced together; this yields text-audio pairs whose audio is continuous and whose text segments are semantically consistent, i.e., high-quality text-audio pairs for use in subsequently training the model.
- The audio features include, but are not limited to, features characterizing the pitch frequency of the audio segments, audio frame features, and/or audio frame energy features.
- By means of the audio features of the audio segments in the text-audio pairs, it is possible to analyze whether the text-audio pairs to be spliced are suitable for splicing; that is, through the fundamental frequency feature, the audio frame feature and/or the audio frame energy feature, it is determined whether the pitch, rhythm, and other attributes of the text-audio pairs to be spliced are similar or the same.
- The target text-audio pair specifically refers to a reference text-audio pair, and the spliced text-audio pair is a text-audio pair that satisfies the splicing condition with respect to the reference text-audio pair; the spliced text-audio pair is screened out from the plurality of text-audio pairs according to the audio features.
- Based on this, in order to obtain text-audio pairs that can be spliced with each other (i.e., whose timbre and rhythm are similar or the same) and thus generate more sample data, the audio features of the audio segments in each text-audio pair are calculated; after the target text-audio pair is determined, the spliced text-audio pair corresponding to the target text-audio pair is screened out from the plurality of text-audio pairs according to the audio features of the audio segment in the target text-audio pair and the audio features of the audio segments of the respective text-audio pairs, for subsequent generation of sample data. In this way, when a large amount of sample data is spliced out, not only is the quantity requirement satisfied, but the audio features also ensure the similarity between the text-audio pairs being spliced, thereby improving the quality of the spliced text-audio pairs.
- When calculating the audio features of each text-audio pair, the audio segments can be divided into frames and the audio features analyzed on the basis of the audio frames.
- the specific implementation is as follows:
- extract the audio segment of each text-audio pair in the plurality of text-audio pairs, and perform framing processing on the audio segment of each text-audio pair to obtain an audio frame set of each text-audio pair;
- calculate, based on the audio frames included in the audio frame set of each text-audio pair, the pitch frequency feature and the audio frame feature of the audio segment of each text-audio pair;
- determine the audio feature of the audio segment of each text-audio pair according to the pitch frequency feature and the audio frame feature of the audio segment of each text-audio pair.
- The fundamental frequency feature specifically refers to the frequency value corresponding to the lowest-frequency vibration among the series of vibrations with different frequencies and amplitudes emitted by the sounding body in the audio segment; the audio frame feature specifically refers to the frame energy value computed from the points of the spectrum obtained by applying a Fourier transform to each audio frame in the audio segment.
- Correspondingly, the pitch frequency feature can be used to analyze whether the pronunciation vibration amplitudes of the text-audio pairs to be spliced are similar, and the audio frame feature can be used to analyze whether their energy distributions are similar or the same when splicing; through the pitch frequency and frame energy, text-audio pairs that give a better effect after splicing can be selected for splicing, so as to obtain sample data that meets the usage requirements.
- In specific implementation, the audio segments of the text-audio pairs are first extracted and divided into frames to obtain an audio frame set for each text-audio pair; the fundamental frequency feature and the audio frame feature of the audio segment in each text-audio pair are then calculated on the basis of these audio frames; finally, the audio feature of the audio segment of each text-audio pair is determined according to its fundamental frequency feature and audio frame feature.
- Further, when calculating the audio features, a start audio feature (start pitch frequency feature and start audio frame feature) and an end audio feature (end pitch frequency feature and end audio frame feature) can be calculated for the audio segment of each text-audio pair. After the target text-audio pair is screened out from the plurality of text-audio pairs, the spliced text-audio pair is screened out based on the end audio feature of the audio segment in the target text-audio pair and the start audio features of the audio segments in the other text-audio pairs; when the target text-audio pair and the spliced text-audio pair are then spliced, the target text-audio pair is used as the starting text-audio pair and the spliced text-audio pair as the ending text-audio pair, and the two are spliced in order to obtain the to-be-detected text-audio pair to be detected subsequently.
- Conversely, the target text-audio pair can also be used as the ending text-audio pair: the spliced text-audio pair is then screened out based on the start audio feature of the audio segment in the target text-audio pair and the end audio features of the audio segments in the other text-audio pairs, and when the target text-audio pair and the spliced text-audio pair are spliced, the spliced text-audio pair is used as the starting text-audio pair and the target text-audio pair as the ending text-audio pair, so as to obtain the to-be-detected text-audio pair to be detected subsequently. In this process, the possibility of splicing the target text-audio pair with other text-audio pairs both as the starting pair and as the ending pair is considered; for text-audio pairs that cannot be spliced with the target text-audio pair, the splicing processing can be omitted, thereby improving the processing efficiency of the subsequent splicing process.
- The pitch frequency feature can be calculated with a time-domain estimation method, in which the pitch frequency is estimated directly from the audio waveform, such as the autocorrelation method, the parallel processing method, the average magnitude difference method, or the data reduction method. It can also be calculated with a transform method, in which the speech signal is transformed to the frequency or cepstral domain to estimate the pitch frequency: the influence of the vocal tract is first removed by homomorphic analysis to obtain the information belonging to the excitation part, and the pitch frequency is then calculated, as in the cepstral method. Alternatively, a hybrid method can be used: the vocal-tract model parameters of the signal are extracted first, the signal is filtered with them to obtain the sound-source sequence, and the pitch frequency is finally calculated with the autocorrelation method or the average magnitude difference method. An appropriate method for calculating the pitch frequency feature of an audio segment may be selected according to the actual application scenario, which is not limited in this embodiment.
- Likewise, the calculation of the audio frame feature of an audio segment may be implemented with an appropriate method selected according to the actual application scenario, which is not limited in this embodiment.
- When dividing the audio segments of the text-audio pairs into frames, a fixed frame length can be used, such as 32 ms or 64 ms; the specific frame length can be set according to actual needs, which is not limited in this embodiment. A sketch of how the per-frame pitch and frame-energy values and the start/end audio features could be computed is given below.
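The sketch below illustrates one way such start/end features could be computed, using fixed 32 ms frames, a simple autocorrelation pitch estimate, and FFT-based frame energy; the frame length, F0 search range, and feature layout are illustrative assumptions rather than formulas taken from this specification.

```python
# Frame the audio, estimate a pitch value per frame with a basic autocorrelation
# method, compute frame energy from the FFT spectrum, and keep the start/end
# values as the segment's start/end audio features.
import numpy as np

def frame_signal(y: np.ndarray, sr: int, frame_ms: float = 32.0) -> np.ndarray:
    frame_len = int(sr * frame_ms / 1000)
    n_frames = len(y) // frame_len
    return y[: n_frames * frame_len].reshape(n_frames, frame_len)

def frame_pitch(frame: np.ndarray, sr: int, fmin: float = 60.0, fmax: float = 400.0) -> float:
    """Autocorrelation pitch estimate for one frame (0.0 if no clear peak)."""
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    if hi >= len(ac):
        return 0.0
    lag = lo + int(np.argmax(ac[lo:hi]))
    return sr / lag if ac[lag] > 0 else 0.0

def frame_energy(frame: np.ndarray) -> float:
    """Frame energy computed from the magnitude spectrum of the frame."""
    spectrum = np.abs(np.fft.rfft(frame))
    return float(np.sum(spectrum ** 2))

def segment_audio_features(y: np.ndarray, sr: int) -> dict:
    frames = frame_signal(y, sr)
    f0 = [frame_pitch(f, sr) for f in frames]
    energy = [frame_energy(f) for f in frames]
    return {
        "start_f0": f0[0], "end_f0": f0[-1],                    # start/end pitch frequency features
        "start_energy": energy[0], "end_energy": energy[-1],    # start/end audio frame features
    }
```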
- Following the above example, the start pitch frequency and start frame energy and the end pitch frequency and end frame energy need to be determined for the audio segment of each of the text-audio pairs TA1 to TA5. Based on this, the audio segments in each text-audio pair are first extracted and divided into frames, obtaining five sets of audio frames corresponding to the text-audio pairs respectively, and the start/end pitch frequency and start/end frame energy of each audio segment are calculated from these audio frames as the audio features of the corresponding text-audio pair.
- In conclusion, the audio features of the audio segments in each text-audio pair are calculated in advance, and the attribute information of the audio segments in each text-audio pair is analyzed in the attribute dimension; considering that a target text-audio pair corresponds to a spliced text-audio pair, a text-audio pair with a better effect after splicing can be selected as the spliced text-audio pair in combination with the audio features, so as to improve the quality of the sample data.
- Therefore, the target text-audio pair and the spliced text-audio pair are screened out from the text-audio pairs according to the audio features, and the splicing processing is subsequently performed to obtain sample data that meets the writing requirements.
- the specific implementation methods are steps S1042 to S1052 as shown in FIG. 4 .
- Step S1042 integrating the audio segment, text segment, and audio feature of each text-audio pair in the plurality of text-audio pairs to obtain a text-audio package corresponding to each text-audio pair, and writing it into the segment database;
- Step S1044 selecting any text-audio package in the segment database as the target text-audio package, and determining the text-audio pair in the target text-audio package as the target text-audio pair;
- Step S1046 selecting the text-audio packages other than the target text-audio package in the segment database to form a set of to-be-screened text-audio packages;
- Step S1048 determining the text-audio pair of each to-be-screened text-audio package included in the set of to-be-screened text-audio packages as a to-be-screened text-audio pair;
- Step S1050 screening out the spliced text-audio package from the set of to-be-screened text-audio packages based on the audio features of the audio segment of the target text-audio pair and the audio features of the audio segments of the to-be-screened text-audio pairs;
- Step S1052 taking the text-audio pair in the spliced text-audio package as the spliced text-audio pair.
- The text-audio package specifically refers to a set composed of a text-audio pair written into the segment database and its corresponding audio features; the segment database specifically refers to a database that temporarily stores the text segments and audio segments of the text-audio pairs and the corresponding audio features of the audio segments.
- After the plurality of text-audio pairs are obtained, since screening the associated spliced text-audio pair for a target text-audio pair takes a certain amount of time, the text-audio packages can first be written into the segment database; when splicing processing is required, text-audio pairs can then be extracted from the segment database for subsequent splicing processing. An illustrative sketch of such a package is given below.
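As an illustration only, the text-audio package and segment database could be represented as follows; the in-memory dict layout and field names are assumptions, not part of this specification, and the feature keys reuse those of the earlier sketch.

```python
# A minimal stand-in for the segment database: each text-audio package bundles a
# text segment, its audio segment, and the audio features of that audio segment.
segment_database = {}

def write_package(package_id: str, text_segment: str, audio_segment, audio_features: dict) -> None:
    """Integrate a text segment, its audio segment, and its audio features into a
    text-audio package and store it temporarily in the segment database."""
    segment_database[package_id] = {
        "text": text_segment,
        "audio": audio_segment,
        "features": audio_features,   # e.g. start/end pitch and frame-energy features
    }

# Example (hypothetical names):
# write_package("TP1", "I watched", audio_0_to_3s, segment_audio_features(audio_0_to_3s, sr))
```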
- The to-be-screened text-audio packages contained in the set of to-be-screened text-audio packages specifically refer to the text-audio packages in the segment database other than the target text-audio package; a to-be-screened text-audio pair is the text-audio pair contained in a to-be-screened text-audio package; and the spliced text-audio package specifically refers to the text-audio package to which a text-audio pair that can be spliced with the target text-audio pair belongs.
- In specific implementation, any text-audio package is selected from the segment database as the target text-audio package, and the text-audio pair contained in the target text-audio package is extracted as the target text-audio pair; at the same time, the text-audio packages other than the target text-audio package are selected as to-be-screened text-audio packages, forming the set of to-be-screened text-audio packages.
- Next, the matching degree between the target text-audio pair and each to-be-screened text-audio pair can be calculated, and the text-audio package to which a to-be-screened text-audio pair with a higher matching degree belongs is used as the spliced text-audio package; that is, the text-audio pair in the spliced text-audio package is used as the spliced text-audio pair corresponding to the target text-audio pair, so that the two can subsequently be spliced to obtain sample data that meets the requirements for writing into the training database.
- Further, when screening out the spliced text-audio pair corresponding to the target text-audio pair, the spliced text-audio package can be obtained in the following manner, and the text-audio pair in it is then used as the spliced text-audio pair for subsequent splicing with the target text-audio pair.
- the specific implementation method is as follows:
- determine a first audio feature of the audio segment in the target text-audio pair and a second audio feature of the audio segment in each to-be-screened text-audio pair, and calculate the feature distance between the first audio feature and each second audio feature;
- determine, as the spliced text-audio package, the to-be-screened text-audio package to which a to-be-screened text-audio pair whose feature distance is smaller than the preset distance threshold belongs.
- The first audio feature is the audio feature of the audio segment in the target text-audio pair, and the second audio feature is the audio feature of the audio segment in a to-be-screened text-audio pair; the feature distance specifically refers to a numerical value for evaluating the degree of matching between text-audio pairs: the larger the feature distance, the lower the matching degree between the two text-audio pairs, and conversely, the smaller the feature distance, the higher the matching degree.
- In specific implementation, the feature distance between the target text-audio pair and each to-be-screened text-audio pair is calculated according to the first audio feature and the second audio feature, and the to-be-screened text-audio pairs whose feature distance is smaller than the preset distance threshold are selected as spliced text-audio pairs for subsequent splicing processing.
- The feature distance L is calculated from the start/end audio features, where F0e represents the end pitch frequency feature of the audio segment in the target text-audio pair, F0s represents the start pitch frequency feature of the audio segment in the to-be-screened text-audio pair, Ee represents the end audio frame feature of the audio segment in the target text-audio pair, and Es represents the start audio frame feature of the audio segment in the to-be-screened text-audio pair. An illustrative sketch of this screening computation is given below.
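Since the exact combination formula is not reproduced in this text, the sketch below assumes a simple weighted sum of absolute differences between the end features of the target pair and the start features of a candidate pair; the weights and the distance threshold are illustrative assumptions, not the formula of this specification.

```python
# Compare the end features (F0e, Ee) of the target pair with the start features
# (F0s, Es) of each candidate, and keep candidates whose distance is below the
# preset threshold LT, i.e. the spliced text-audio pairs.
def feature_distance(target_feats: dict, candidate_feats: dict,
                     w_f0: float = 1.0, w_energy: float = 1.0) -> float:
    return (w_f0 * abs(target_feats["end_f0"] - candidate_feats["start_f0"])
            + w_energy * abs(target_feats["end_energy"] - candidate_feats["start_energy"]))

def screen_spliced_pairs(target_feats: dict, candidates: dict, distance_threshold: float):
    """Return the candidate ids whose feature distance to the target pair is
    smaller than the preset distance threshold."""
    return [cid for cid, feats in candidates.items()
            if feature_distance(target_feats, feats) < distance_threshold]
```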
- Following the above example, after the audio features of the audio segments in the text-audio pairs TA1 to TA5 are calculated, each text-audio pair and its corresponding audio features are integrated into a text-audio package (TP1 to TP5) and written into the segment database D, so that text-audio packages can be selected from it for splicing processing later.
- When splicing is performed, the text-audio package TP1 is selected from the segment database D as the target text-audio package, and the text-audio pair TA1 contained in TP1 is determined to be the target text-audio pair; at the same time, the text-audio packages TP2, TP3, TP4, and TP5 are used as to-be-screened text-audio packages, and the text-audio pairs TA2, TA3, TA4, and TA5 in them are used as to-be-screened text-audio pairs.
- Next, the feature distances between the target text-audio pair TA1 and the to-be-screened text-audio pairs are calculated according to the audio features, obtaining feature distances L1 to L4, which are compared with the preset distance threshold LT; a to-be-screened text-audio pair whose feature distance is smaller than LT is regarded as a spliced text-audio pair that can be spliced with the target text-audio pair TA1. The comparison results show that the feature distances L1, L3, and L4 are smaller than the distance threshold LT, which indicates that when the target text-audio pair TA1 is spliced with the to-be-screened text-audio pairs TA2, TA4, and TA5, their timbre and rhythm are relatively close, so that higher-quality sample data can be spliced later.
- Since the to-be-screened text-audio pairs TA2, TA4, and TA5 can be spliced with the target text-audio pair TA1, the text-audio pairs TA2, TA4, and TA5 are determined as the spliced text-audio pairs of the target text-audio pair TA1.
- In addition, the target text-audio pair can also be used as the backward text-audio pair and the to-be-screened text-audio pairs as forward text-audio pairs, and the feature distances between them calculated in the same way.
- In summary, the spliced text-audio pairs are screened in combination with the audio features, so that the screened text-audio pairs and the target text-audio pair are close to each other in timbre, rhythm, and other attributes; to-be-detected text-audio pairs that meet the usage requirements can then be spliced out subsequently, thereby expanding the training database for use by the downstream business.
- Step S106 splicing the target text-audio pair and the spliced text-audio pair into a to-be-detected text-audio pair, and detecting the to-be-detected text-audio pair.
- After the target text-audio pair and the spliced text-audio pair corresponding to the target text-audio pair are screened out based on the audio features, the target text-audio pair and the spliced text-audio pair are further spliced to obtain the to-be-detected text-audio pair. Since the to-be-detected text-audio pair is spliced from two text-audio pairs, in order to further ensure the quality of the text-audio pairs written into the training database, the to-be-detected text-audio pair can also be detected before it is written into the training database, for example to check whether its audio segment is clear and whether the length of its text segment is appropriate, so that text-audio pairs of better quality are written into the training database.
- In addition, the target text-audio pair itself may already meet the requirements for writing into the training database, i.e., it can be written into the training database without being spliced with other text-audio pairs; therefore, in order to improve the richness of the training database, the target text-audio pair can also be detected before the splicing processing is performed.
- the specific implementation is as follows:
- The target sampling information specifically refers to the number of sampling bits and the sampling frequency used when randomly sampling the audio segment in the target text-audio pair; the sampling frequency refers to the number of samples taken from the audio segment per second, and the higher the sampling frequency, the more realistic and natural the restoration of the audio segment, whereas the lower the sampling frequency, the less realistic and natural the restoration.
- The target text information specifically refers to information such as the length of the text segment in the target text-audio pair and the number of characters it contains.
- The preset detection condition specifically refers to the condition used to detect whether the audio segment and the text segment meet the requirements for writing into the training database: the text-audio pair may be written into the training database when both its audio segment and its text segment satisfy the preset detection condition, or when either the audio segment or the text segment satisfies the preset detection condition.
- After the target text-audio pair is detected, step S106 may be performed, and the spliced to-be-detected text-audio pairs are then detected, so that the audio segments and text segments of the text-audio pairs written into the training database are balanced; this ensures that the text segments and audio segments in the training database are similar or identical in form (audio length, text length, audio energy, etc.), which facilitates the subsequent use of the sample data for training.
- Following the above example, the text-audio pair TA1 is selected from the text-audio pairs TA1 to TA5 as the target text-audio pair; at this time, the first audio segment (0s-3s) in the text-audio pair TA1 can be sampled and the first text segment examined, and the target text-audio pair detected on this basis, so as to avoid omitting a text-audio pair that already meets the requirements for writing into the training database, thereby improving the richness of the training database.
- Furthermore, since each text-audio pair contains a text segment and an audio segment, the audio segments need to be spliced at the same time as the text segments in order to generate the to-be-detected text-audio pair. In this embodiment, the specific implementation is as follows:
- extract the target text segment and the target audio segment from the target text-audio pair, and extract the spliced text segment and the spliced audio segment from the spliced text-audio pair;
- splice the target text segment and the spliced text segment into a to-be-detected text segment, and splice the target audio segment and the spliced audio segment into a to-be-detected audio segment;
- form the to-be-detected text-audio pair based on the to-be-detected text segment and the to-be-detected audio segment.
- In specific implementation, the target text segment and the target audio segment are first extracted from the target text-audio pair, and the spliced text segment and the spliced audio segment are extracted from the spliced text-audio pair; then the target text segment and the spliced text segment are spliced into the to-be-detected text segment, and the target audio segment and the spliced audio segment are spliced into the to-be-detected audio segment; finally, the to-be-detected text segment and the to-be-detected audio segment form the to-be-detected text-audio pair. A sketch of this splicing step is given below.
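A minimal sketch of this splicing step is shown below, assuming each text-audio pair is held as a (text, audio samples) tuple; the data layout is an assumption for illustration.

```python
# Concatenate the text segments and the audio segments of the target pair and the
# spliced pair to form the to-be-detected text-audio pair.
import numpy as np
from typing import Tuple

TextAudioPair = Tuple[str, np.ndarray]

def splice_pairs(target: TextAudioPair, spliced: TextAudioPair) -> TextAudioPair:
    target_text, target_audio = target
    spliced_text, spliced_audio = spliced
    detected_text = target_text + " " + spliced_text                   # to-be-detected text segment
    detected_audio = np.concatenate([target_audio, spliced_audio])     # to-be-detected audio segment
    return detected_text, detected_audio

# e.g. splicing TA1 ("I watched", 3 s of audio) with TA4 ("football", 2 s of audio)
# yields the to-be-detected pair ("I watched football", 5 s of audio).
```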
- Further, when detecting the to-be-detected text-audio pair and writing it into the training database, the specific implementation is as follows: the to-be-detected audio segment in the to-be-detected text-audio pair is randomly sampled to obtain a sampling result, and the text length of the to-be-detected text segment is determined; when the sampling result and the text length satisfy the preset detection condition, the to-be-detected text-audio pair is written into the training database.
- Following the above example, after the spliced text-audio pairs TA2, TA4, and TA5 of the target text-audio pair TA1 are determined, splicing processing is performed on the target text-audio pair and the spliced text-audio pairs: the first audio segment (0s-3s) and the first text segment (I watched) are extracted from the target text-audio pair TA1; at the same time, the second audio segment (3s-4s) and the second text segment (a) are extracted from the spliced text-audio pair TA2, the fourth audio segment (6s-8s) and the fourth text segment (football) are extracted from the spliced text-audio pair TA4, and the fifth audio segment (8s-10s) and the fifth text segment (game) are extracted from the spliced text-audio pair TA5.
- Then the first audio segment and the second audio segment are spliced to obtain the first to-be-detected audio segment (4 s long), the first audio segment and the fourth audio segment are spliced to obtain the second to-be-detected audio segment (5 s long), and the first audio segment and the fifth audio segment are spliced to obtain the third to-be-detected audio segment (5 s long); at the same time, the first text segment and the second text segment are spliced to obtain the first to-be-detected text segment, the first text segment and the fourth text segment are spliced to obtain the second to-be-detected text segment, and the first text segment and the fifth text segment are spliced to obtain the third to-be-detected text segment.
- Next, the first to-be-detected audio segment and the first to-be-detected text segment are combined into the first to-be-detected text-audio pair, the second to-be-detected audio segment and the second to-be-detected text segment are combined into the second to-be-detected text-audio pair, and the third to-be-detected audio segment and the third to-be-detected text segment are combined into the third to-be-detected text-audio pair.
- The above three to-be-detected text-audio pairs are then further detected, since the text-audio pairs written into the training database are to be used as sample data. Based on this, the to-be-detected audio segment of each to-be-detected text-audio pair is randomly sampled in the interval [0, 1]: the sampling result of the first to-be-detected audio segment in the first to-be-detected text-audio pair is determined to be U1, the sampling result of the second to-be-detected audio segment in the second to-be-detected text-audio pair is determined to be U2, and the sampling result of the third to-be-detected audio segment in the third to-be-detected text-audio pair is determined to be U3. At the same time, the text length of the first to-be-detected text segment in the first to-be-detected text-audio pair is determined to be X1, the text length of the second to-be-detected text segment in the second to-be-detected text-audio pair is determined to be X2, and the text length of the third to-be-detected text segment in the third to-be-detected text-audio pair is determined to be X3.
- It is then judged whether the sampling results U1, U2, and U3 are greater than the preset sampling result Ut, and whether the text lengths X1, X2, and X3 are smaller than the preset text length Xt. According to the judgment results, the sampling result U2 is greater than the preset sampling result Ut and the text length X2 is smaller than the preset text length Xt, and the sampling result U3 is greater than the preset sampling result Ut and the text length X3 is smaller than the preset text length Xt; that is, the second to-be-detected text-audio pair and the third to-be-detected text-audio pair can be written into the training database T. Accordingly, the second to-be-detected text-audio pair (5 s of audio, text "I watched football") and the third to-be-detected text-audio pair (5 s of audio, text "I watched the game") are written into the training database T as sample data for subsequent training of the speech synthesis model. A sketch of this detection condition is given below.
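A minimal sketch of the detection condition used in this example follows; the threshold values Ut and Xt and the use of a single random draw as the sampling result are illustrative assumptions.

```python
# A to-be-detected pair is written into the training database only if its sampling
# result exceeds the preset sampling result Ut and its text length is below the
# preset text length Xt, as in the example above.
import random

def meets_detection_condition(text_segment: str, ut: float = 0.5, xt: int = 20) -> bool:
    sampling_result = random.uniform(0.0, 1.0)   # random sampling in [0, 1], as described above
    return sampling_result > ut and len(text_segment) < xt

training_database = []
for text, audio in [("I watched a", "4s clip"), ("I watched football", "5s clip"), ("I watched the game", "5s clip")]:
    if meets_detection_condition(text):
        training_database.append((text, audio))  # write qualifying pairs into training database T
```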
- In this way, the to-be-detected text-audio pairs are detected in both the audio dimension and the text dimension, so that the text-audio pairs written into the training database all meet the writing requirements, which effectively improves the quality of the sample data in the training database.
- Step S108 in the case that the to-be-detected text-audio pair satisfies a preset detection condition, write the to-be-detected text-audio pair into a training database.
- If the to-be-detected text-audio pair meets the preset detection condition, this indicates that it meets the requirements for writing into the training database, and it can be written into the training database as sample data; in this way, when the speech synthesis model is trained later, sample data that meets the training requirements can be extracted from the training database, improving the prediction accuracy of the trained speech synthesis model.
- Furthermore, in practical applications, the number of data items written into the training database can be limited; that is, before writing a to-be-detected text-audio pair that satisfies the preset detection condition, it is detected whether the number of text-audio pairs in the training database is less than or equal to a preset data-volume threshold. If so, text-audio pairs that meet the preset detection condition can be written into the training database; if the number is greater than the threshold, the training database can no longer accept text-audio pairs, and the subsequent splicing of text-audio pairs can be stopped.
- Further, after the sample data is written into the training database, the text-audio pairs in the training database can be used as sample data (sample text-audio pairs) to train the speech synthesis model in the downstream business: a sample text-audio pair is extracted from the training database, the sample text-audio pair containing a sample text segment and a sample audio segment, and a speech synthesis model is trained based on the sample text segment and the sample audio segment to obtain a target speech synthesis model.
- In specific implementation, a large number of sample text-audio pairs can be extracted from the training database, and the speech synthesis model is trained based on the sample text segments and sample audio segments in the sample text-audio pairs until a speech synthesis model satisfying the training stop condition is obtained; this model is stored as the target speech synthesis model, so that text can be converted into audio in the speech synthesis scenario. For example, if the text is "I like to watch football games", the text is input into the speech synthesis model for processing, and the audio corresponding to the text is obtained, realizing the conversion of text into speech.
- In addition, if the to-be-detected text-audio pair does not satisfy the preset detection condition, a multi-degree spliced text-audio pair corresponding to the spliced text-audio pair can be selected from the plurality of text-audio pairs according to the audio features, and splicing and detection processing is performed again, until a text-audio pair that meets the requirements for writing into the training database is obtained.
- However, considering that even if the splicing and detection processing is performed repeatedly, the resulting to-be-detected text-audio pairs may still not meet the requirements for writing into the training database, a stop condition can be set: when the number of splicing operations reaches a certain value, the processing of this text-audio pair is stopped and it is discarded.
- the specific implementation method is as follows:
- The multi-degree spliced text-audio pair specifically refers to a text-audio pair that can be spliced with the spliced text-audio pair. Based on this, if the to-be-detected text-audio pair does not meet the preset detection condition, this indicates that the to-be-detected text-audio pair obtained by splicing the target text-audio pair and the spliced text-audio pair does not meet the requirements for writing into the training database; in this case, the multi-degree spliced text-audio pair is screened out, spliced with the to-be-detected text-audio pair to obtain a multi-degree to-be-detected text-audio pair, and detected.
- If the multi-degree to-be-detected text-audio pair still does not satisfy the condition, the multi-degree to-be-detected text-audio pair is used as the to-be-detected text-audio pair and the multi-degree spliced text-audio pair as the spliced text-audio pair, and the process returns to the step of screening out a multi-degree spliced text-audio pair, until a text-audio pair that meets the requirements for writing into the training database is obtained, or until the text-audio pair is discarded when the condition for stopping splicing is reached.
- Following the above example, if the first to-be-detected text-audio pair does not meet the requirements for writing into the training database T, then, since the first to-be-detected audio segment in the first to-be-detected text-audio pair is composed of the first audio segment and the second audio segment, and the first to-be-detected text segment is composed of the first text segment and the second text segment, the third text-audio pair TA3, which can be spliced with the second text-audio pair TA2, can be selected as the multi-degree spliced text-audio pair and spliced with the first to-be-detected text-audio pair to obtain the multi-degree to-be-detected text-audio pair.
- At this time, the multi-degree to-be-detected audio segment in the multi-degree to-be-detected text-audio pair consists of the first, second, and third audio segments, and the multi-degree to-be-detected text segment consists of the first, second, and third text segments; that is, the multi-degree to-be-detected text-audio pair is (a 6 s audio segment, the text segment "I watched a wonderful"). The multi-degree to-be-detected text-audio pair is then detected: if it satisfies the preset detection condition, it can be written into the training database T; if it does not satisfy the preset detection condition, a text-audio pair that can be spliced with the third text-audio pair TA3 can be selected next and the splicing and detection processing performed again, or the multi-degree to-be-detected text-audio pair can be discarded and other text-audio pairs selected for the above processing, until sample data that meets the requirements for writing into the training database T is obtained.
- In summary, this specification provides a sample generation method: after acquiring multiple text-audio pairs, the audio feature of the audio segment of each text-audio pair is calculated; the target text-audio pair and the spliced text-audio pair corresponding to the target text-audio pair are screened out from the multiple text-audio pairs according to the audio features; the target text-audio pair and the spliced text-audio pair are then spliced into a to-be-detected text-audio pair and detected; and the to-be-detected text-audio pair is written into the training database when it satisfies the preset detection condition. In the sample data preparation stage, high-quality sample data that meets the needs of downstream business use can thus be obtained by splicing, which saves the resource consumption cost of the data preparation stage; moreover, the amount of sample data written into the training database after splicing is large, which effectively solves the problem of poor speech synthesis effect caused by the small amount of downstream business sample data and the uneven distribution of audio lengths in the sample data, thereby improving the business processing efficiency of the downstream business.
- This embodiment provides a sample generation method applied in a speech synthesis scenario to solve the problems described above; the method is further described below with reference to FIG. 5, taking its application in the speech synthesis scenario as an example.
- FIG. 5 shows a processing flow chart of a sample generation method applied in a speech synthesis scenario provided by an embodiment of this specification, which specifically includes the following steps:
- Step S502: acquire the target text and the audio corresponding to the target text.
- Step S504: preprocess the audio to obtain the target audio, and convert the target text into a phoneme sequence.
- Step S506: align the phoneme sequence with the target audio, obtain a phoneme audio file according to the alignment processing result, and determine the segmentation positions of the phoneme audio file.
- Step S508: segment the phoneme audio file according to the segmentation positions to obtain multiple phoneme-audio pairs, and determine, based on the target text, the text segment corresponding to the phoneme segment of each phoneme-audio pair in the multiple phoneme-audio pairs.
- Step S510: generate a plurality of text-audio pairs according to the text segment corresponding to the phoneme segment in each phoneme-audio pair and the audio segment in each phoneme-audio pair.
- Step S512: extract the audio segment of each text-audio pair in the plurality of text-audio pairs, and perform frame segmentation processing on the audio segment of each text-audio pair to obtain an audio frame set of each text-audio pair.
- Step S514: based on the audio frames contained in the audio frame set of each text-audio pair in the plurality of text-audio pairs, calculate the fundamental frequency feature and the audio frame feature of the audio segment of each text-audio pair.
- Step S516: integrate the audio segment, text segment, fundamental frequency feature and audio frame feature of each text-audio pair in the multiple text-audio pairs to obtain a text-audio package corresponding to each text-audio pair, and write the text-audio packages into the segment database.
- Step S518: select any text-audio package in the segment database as the target text-audio package, and determine the text-audio pair in the target text-audio package as the target text-audio pair.
- Step S520: select the text-audio packages other than the target text-audio package in the segment database to form a set of text-audio packages to be screened.
- Step S522: determine the text-audio pair of each to-be-screened text-audio package contained in the set of to-be-screened text-audio packages as a to-be-screened text-audio pair.
- Step S524: determine the fundamental frequency feature and audio frame feature of the audio segment of the target text-audio pair according to the target text-audio package, and determine the fundamental frequency feature and audio frame feature of the audio segment of each to-be-screened text-audio pair according to the corresponding to-be-screened text-audio package.
- Step S526: calculate the feature distance based on the fundamental frequency feature and audio frame feature of the audio segment of the target text-audio pair and those of the audio segment of the to-be-screened text-audio pair (an illustrative sketch of this feature calculation and distance screening is given after Step S540 below).
- Step S528: determine the to-be-screened text-audio package whose feature distance is less than the preset distance threshold as the spliced text-audio package.
- Step S530: take the text-audio pair in the spliced text-audio package as the spliced text-audio pair.
- Step S532: extract the target text segment and the target audio segment from the target text-audio pair, and the spliced text segment and the spliced audio segment from the spliced text-audio pair.
- Step S534: splice the target text segment and the spliced text segment into a to-be-detected text segment, and splice the target audio segment and the spliced audio segment into a to-be-detected audio segment.
- Step S536: compose a to-be-detected text-audio pair from the to-be-detected text segment and the to-be-detected audio segment.
- Step S538: perform sampling processing on the to-be-detected audio segment in the to-be-detected text-audio pair to obtain to-be-detected sampling information, and determine the to-be-detected text information of the to-be-detected text segment in the to-be-detected text-audio pair.
- Step S540: when the to-be-detected sampling information and the to-be-detected text information both satisfy the preset detection condition, write the to-be-detected text-audio pair into the training database.
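- As a rough sketch of the feature calculation and screening in Steps S512–S528 above (the exact definitions of the fundamental frequency feature and the audio frame feature are not fixed here; the pYIN pitch estimate, the RMS frame energy and the use of the librosa library below are assumptions made purely for illustration):

```python
import numpy as np
import librosa  # assumed third-party library, used here only for illustration


def audio_features(wav_path: str) -> np.ndarray:
    """Return a small feature vector (mean fundamental frequency, mean frame energy)."""
    y, sr = librosa.load(wav_path, sr=None)
    f0, _, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C7"), sr=sr)
    pitch = float(np.nanmean(f0)) if np.any(~np.isnan(f0)) else 0.0  # fundamental frequency feature
    energy = float(np.mean(librosa.feature.rms(y=y)))                # audio frame feature
    return np.array([pitch, energy])


def feature_distance(target_vec: np.ndarray, candidate_vec: np.ndarray) -> float:
    """Feature distance between the target pair and a to-be-screened pair (Step S526)."""
    return float(np.linalg.norm(target_vec - candidate_vec))


# A to-be-screened package whose distance falls below a preset threshold would be kept
# as the spliced text-audio package (Step S528).
```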
- This specification thus provides a sample generation method which, in the sample data preparation stage, obtains high-quality sample data that meets the needs of downstream business use by splicing, saving the resource consumption cost of the data preparation stage. The amount of sample data written into the training database after splicing is large, which effectively solves the problem of poor speech synthesis effect caused by the small amount of downstream business sample data and the uneven distribution of audio lengths in the sample data, thereby improving the business processing efficiency of the downstream business.
- FIG. 6 shows a schematic structural diagram of a sample generating apparatus provided by an embodiment of the present specification.
- the device includes:
- an acquisition module 602 configured to acquire a plurality of text-audio pairs, wherein each text-audio pair includes a text segment and an audio segment;
- the calculation module 604 is configured to calculate the audio feature of the audio segment of each text-audio pair in the plurality of text-audio pairs, and to screen out, from the plurality of text-audio pairs according to the audio features, the target text-audio pair and the spliced text-audio pair corresponding to the target text-audio pair;
- the splicing module 606 is configured to splice the target text-audio pair and the spliced text-audio pair into a to-be-detected text-audio pair, and to detect the to-be-detected text-audio pair;
- the writing module 608 is configured to write the to-be-detected text-audio pair into a training database when the to-be-detected text-audio pair meets a preset detection condition.
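- Purely as an illustrative sketch of how the four modules above could be organized in code (class and method names are assumptions; the method bodies are placeholders rather than the actual processing defined in this specification):

```python
class SampleGenerator:
    """Minimal skeleton of the acquisition (602), calculation (604), splicing (606)
    and writing (608) modules; the bodies are placeholders for the processing above."""

    def __init__(self, training_db: list):
        self.training_db = training_db

    def acquire(self, target_text: str, audio_path: str) -> list:
        """Acquisition module 602: return a list of (text_segment, audio_segment) pairs."""
        raise NotImplementedError

    def screen(self, pairs: list):
        """Calculation module 604: return (target_pair, spliced_pair) chosen by audio features."""
        raise NotImplementedError

    def splice_and_detect(self, target_pair, spliced_pair):
        """Splicing module 606: return the to-be-detected pair and whether it passed detection."""
        raise NotImplementedError

    def write(self, pair, passed: bool) -> None:
        """Writing module 608: keep only pairs that satisfy the preset detection condition."""
        if passed:
            self.training_db.append(pair)
```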
- the obtaining module 602 is further configured to:
- acquire the target text and the audio corresponding to the target text; preprocess the audio to obtain the target audio, and convert the target text into a phoneme sequence; align the phoneme sequence with the target audio, and generate the plurality of text-audio pairs according to the alignment processing result.
- the obtaining module 602 is further configured to: obtain a phoneme audio file according to the alignment processing result, and determine the segmentation positions of the phoneme audio file; segment the phoneme audio file according to the segmentation positions to obtain multiple phoneme-audio pairs, where each phoneme-audio pair contains a phoneme segment and an audio segment; determine, based on the target text, the text segment corresponding to the phoneme segment of each phoneme-audio pair; and generate the plurality of text-audio pairs according to the text segments corresponding to the phoneme segments and the audio segments in the phoneme-audio pairs.
- the calculation module 604 is further configured to: extract the audio segment of each text-audio pair in the plurality of text-audio pairs, and perform frame segmentation processing on each audio segment to obtain an audio frame set of each text-audio pair; calculate, based on the audio frames contained in each audio frame set, the fundamental frequency feature and the audio frame feature of the audio segment of each text-audio pair; and determine the audio feature of each audio segment according to its fundamental frequency feature and audio frame feature.
- the calculation module 604 is further configured to: integrate the audio segment, text segment and audio feature of each text-audio pair in the multiple text-audio pairs to obtain a text-audio package corresponding to each text-audio pair, and write the text-audio packages into the segment database; select any text-audio package in the segment database as the target text-audio package, and determine the text-audio pair in the target text-audio package as the target text-audio pair; determine the spliced text-audio package based on the audio features and the text-audio packages in the segment database other than the target text-audio package, and take the text-audio pair in the spliced text-audio package as the spliced text-audio pair.
- the calculation module 604 is further configured to: select the text-audio packages other than the target text-audio package in the segment database to form a set of text-audio packages to be screened; determine the text-audio pair of each to-be-screened text-audio package in the set as a to-be-screened text-audio pair; and screen out the spliced text-audio package from the set of to-be-screened text-audio packages based on the audio feature of the audio segment of the target text-audio pair and the audio features of the audio segments of the to-be-screened text-audio pairs.
- the calculation module 604 is further configured to: determine the first audio feature of the audio segment of the target text-audio pair according to the target text-audio package, and determine the second audio feature of the audio segment of each to-be-screened text-audio pair according to the corresponding to-be-screened text-audio package; calculate the feature distance between the first audio feature and the second audio feature; and determine the to-be-screened text-audio package to which a to-be-screened text-audio pair whose feature distance is less than the preset distance threshold belongs as the spliced text-audio package.
- the sample generating device further includes:
- a sampling module configured to: perform sampling processing on the audio segment in the target text-audio pair to obtain target sampling information, and determine the target text information of the text segment in the target text-audio pair; and determine whether the target sampling information and the target text information satisfy the preset detection condition;
- if not, the splicing module 606 is run;
- if so, the target text-audio pair is written into the training database.
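- A minimal sketch of such a preset detection condition is given below; the concrete duration and text-length thresholds are assumptions chosen only for illustration, since the specification leaves the detection condition configurable:

```python
# Assumed thresholds for illustration only; the specification treats these as presets.
MIN_DURATION_S, MAX_DURATION_S = 2.0, 10.0   # acceptable audio-segment length range (seconds)
MIN_CHARS, MAX_CHARS = 5, 50                 # acceptable text-segment length range (characters)


def check_preset_condition(audio_duration_s: float, text: str) -> bool:
    """Check the sampled audio information and the text information against the preset condition."""
    return (MIN_DURATION_S <= audio_duration_s <= MAX_DURATION_S
            and MIN_CHARS <= len(text) <= MAX_CHARS)
```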
- the splicing module 606 is further configured to: extract the target text segment and the target audio segment from the target text-audio pair, and the spliced text segment and the spliced audio segment from the spliced text-audio pair; splice the target text segment and the spliced text segment into a to-be-detected text segment, and the target audio segment and the spliced audio segment into a to-be-detected audio segment; and compose the to-be-detected text-audio pair from the to-be-detected text segment and the to-be-detected audio segment.
- the splicing module 606 is further configured to: perform sampling processing on the to-be-detected audio segment to obtain to-be-detected sampling information, determine the to-be-detected text information of the to-be-detected text segment, and detect the to-be-detected sampling information and the to-be-detected text information based on the preset detection condition.
- the writing module 608 is further configured to: write the to-be-detected text-audio pair into the training database when the to-be-detected sampling information and the to-be-detected text information both satisfy the preset detection condition.
- the sample generating device further includes:
- a screening module configured to: when the to-be-detected text-audio pair does not satisfy the preset detection condition, screen out, from the plurality of text-audio pairs according to the audio features, the multi-degree spliced text-audio pair corresponding to the spliced text-audio pair; splice the multi-degree spliced text-audio pair and the to-be-detected text-audio pair into a multi-degree to-be-detected text-audio pair, and determine whether the multi-degree to-be-detected text-audio pair satisfies the preset detection condition; if so, write the multi-degree to-be-detected text-audio pair into the training database; if not, take the multi-degree spliced text-audio pair as the spliced text-audio pair and the multi-degree to-be-detected text-audio pair as the to-be-detected text-audio pair, and return to the screening step;
- the sample generating device further includes:
- a training module configured to: extract sample text-audio pairs from the training database, where each sample text-audio pair includes a sample text segment and a sample audio segment; and train the speech synthesis model based on the sample text segments and the sample audio segments to obtain the target speech synthesis model.
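- A rough sketch of the training module's loop is shown below; `fit_step` and the fixed epoch count stand in for whatever concrete model and training stop condition are used, and are assumptions rather than details taken from this specification:

```python
def train_target_model(training_db, synthesis_model, epochs: int = 10):
    """Fit the speech synthesis model on sample text/audio segments from the training database."""
    for _ in range(epochs):                          # stand-in for the training stop condition
        for sample_text, sample_audio in training_db:
            synthesis_model.fit_step(sample_text, sample_audio)  # assumed single training step
    return synthesis_model                           # the target speech synthesis model
```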
- the sample generation apparatus, after acquiring multiple text-audio pairs, calculates the audio feature of the audio segment of each text-audio pair, screens out from the multiple text-audio pairs, according to the audio features, the target text-audio pair and the spliced text-audio pair corresponding to the target text-audio pair, splices the target text-audio pair and the spliced text-audio pair into a to-be-detected text-audio pair, detects it, and writes the to-be-detected text-audio pair into the training database when it satisfies the preset detection condition. In the sample data preparation stage, high-quality sample data that meets the needs of downstream business use can thus be obtained by splicing, which saves the resource consumption cost of the data preparation stage; the amount of sample data written into the training database after splicing is large, which effectively solves the problem of poor speech synthesis effect caused by the small amount of downstream business sample data and the uneven distribution of audio lengths in the sample data, thereby improving the business processing efficiency of the downstream business.
- The above is a schematic solution of a sample generating apparatus according to this embodiment. It should be noted that the technical solution of the sample generation apparatus and the technical solution of the above sample generation method belong to the same concept; for details not described in detail in the technical solution of the sample generation apparatus, refer to the description of the technical solution of the sample generation method above.
- FIG. 7 shows a structural block diagram of a computing device 700 according to an embodiment of the present specification.
- Components of the computing device 700 include, but are not limited to, memory 710 and processor 720 .
- the processor 720 is connected to the memory 710 through the bus 730, and the database 750 is used for storing data.
- Computing device 700 also includes access device 740 that enables computing device 700 to communicate via one or more networks 760 .
- Examples of such networks include the public switched telephone network (PSTN), a local area network (LAN), a wide area network (WAN), a personal area network (PAN), or a combination of communication networks such as the Internet.
- Access device 740 may include one or more of any type of network interface (e.g., a network interface card (NIC)), wired or wireless, such as an IEEE 802.11 wireless local area network (WLAN) interface, a Worldwide Interoperability for Microwave Access (WiMAX) interface, an Ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a Bluetooth interface, a Near Field Communication (NFC) interface, and the like.
- computing device 700 and other components not shown in FIG. 7 may also be connected to each other, such as through a bus. It should be understood that the structural block diagram of the computing device shown in FIG. 7 is only for the purpose of example, rather than limiting the scope of the present specification. Those skilled in the art can add or replace other components as required.
- Computing device 700 may be any type of stationary or mobile computing device, including mobile computers or mobile computing devices (e.g., tablet computers, personal digital assistants, laptop computers, notebook computers, netbooks, etc.), mobile phones (e.g., smart phones), wearable computing devices (e.g., smart watches, smart glasses, etc.) or other types of mobile devices, or stationary computing devices such as desktop computers or PCs.
- Computing device 700 may also be a mobile or stationary server.
- the processor 720 is configured to execute the following computer-executable instructions: acquire a plurality of text-audio pairs, wherein each text-audio pair contains a text segment and an audio segment; calculate the audio feature of the audio segment of each text-audio pair in the plurality of text-audio pairs, and screen out, from the plurality of text-audio pairs according to the audio features, the target text-audio pair and the spliced text-audio pair corresponding to the target text-audio pair; splice the target text-audio pair and the spliced text-audio pair into a to-be-detected text-audio pair, and detect the to-be-detected text-audio pair; and when the to-be-detected text-audio pair satisfies a preset detection condition, write the to-be-detected text-audio pair into a training database.
- the above is a schematic solution of a computing device according to this embodiment. It should be noted that the technical solution of the computing device and the technical solution of the above-mentioned sample generation method belong to the same concept, and the details not described in detail in the technical solution of the computing device can be referred to the description of the technical solution of the above-mentioned sample generation method.
- An embodiment of the present specification further provides a computer-readable storage medium which stores computer instructions that, when executed by a processor, are used to: acquire a plurality of text-audio pairs, wherein each text-audio pair contains a text segment and an audio segment; calculate the audio feature of the audio segment of each text-audio pair in the plurality of text-audio pairs, and screen out, from the plurality of text-audio pairs according to the audio features, the target text-audio pair and the spliced text-audio pair corresponding to the target text-audio pair; splice the target text-audio pair and the spliced text-audio pair into a to-be-detected text-audio pair, and detect the to-be-detected text-audio pair; and when the to-be-detected text-audio pair satisfies the preset detection condition, write the to-be-detected text-audio pair into a training database.
- the above is a schematic solution of a computer-readable storage medium of this embodiment. It should be noted that the technical solution of the storage medium and the technical solution of the above-mentioned sample generation method belong to the same concept, and the details not described in detail in the technical solution of the storage medium can be referred to the description of the technical solution of the above-mentioned sample generation method.
- the computer instructions include computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like.
- the computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunication signal, a software distribution medium, and the like.
Claims (16)
- A sample generation method, comprising: acquiring a plurality of text-audio pairs, wherein each text-audio pair contains a text segment and an audio segment; calculating an audio feature of the audio segment of each text-audio pair in the plurality of text-audio pairs, and screening out, from the plurality of text-audio pairs according to the audio features, a target text-audio pair and a spliced text-audio pair corresponding to the target text-audio pair; splicing the target text-audio pair and the spliced text-audio pair into a to-be-detected text-audio pair, and detecting the to-be-detected text-audio pair; and when the to-be-detected text-audio pair satisfies a preset detection condition, writing the to-be-detected text-audio pair into a training database.
- The sample generation method according to claim 1, wherein acquiring the plurality of text-audio pairs comprises: acquiring a target text and audio corresponding to the target text; preprocessing the audio to obtain target audio, and converting the target text into a phoneme sequence; and aligning the phoneme sequence with the target audio, and generating the plurality of text-audio pairs according to an alignment processing result.
- The sample generation method according to claim 2, wherein generating the plurality of text-audio pairs according to the alignment processing result comprises: obtaining a phoneme audio file according to the alignment processing result, and determining segmentation positions of the phoneme audio file; segmenting the phoneme audio file according to the segmentation positions to obtain a plurality of phoneme-audio pairs, wherein each phoneme-audio pair contains a phoneme segment and an audio segment; determining, based on the target text, a text segment corresponding to the phoneme segment of each phoneme-audio pair in the plurality of phoneme-audio pairs; and generating the plurality of text-audio pairs according to the text segment corresponding to the phoneme segment in each phoneme-audio pair and the audio segment in each phoneme-audio pair.
- The sample generation method according to claim 1, wherein calculating the audio feature of the audio segment of each text-audio pair in the plurality of text-audio pairs comprises: extracting the audio segment of each text-audio pair in the plurality of text-audio pairs, and performing frame segmentation processing on the audio segment of each text-audio pair to obtain an audio frame set of each text-audio pair; calculating, based on the audio frames contained in the audio frame set of each text-audio pair, a fundamental frequency feature and an audio frame feature of the audio segment of each text-audio pair; and determining the audio feature of the audio segment of each text-audio pair according to the fundamental frequency feature and the audio frame feature of the audio segment of each text-audio pair.
- The sample generation method according to claim 1, wherein screening out, from the plurality of text-audio pairs according to the audio features, the target text-audio pair and the spliced text-audio pair corresponding to the target text-audio pair comprises: integrating the audio segment, text segment and audio feature of each text-audio pair in the plurality of text-audio pairs to obtain a text-audio package corresponding to each text-audio pair, and writing the text-audio packages into a segment database; selecting any text-audio package in the segment database as a target text-audio package, and determining the text-audio pair in the target text-audio package as the target text-audio pair; and determining a spliced text-audio package based on the audio features and the text-audio packages in the segment database other than the target text-audio package, and taking the text-audio pair in the spliced text-audio package as the spliced text-audio pair.
- The sample generation method according to claim 5, wherein determining the spliced text-audio package based on the audio features and the text-audio packages in the segment database other than the target text-audio package comprises: selecting the text-audio packages in the segment database other than the target text-audio package to form a set of to-be-screened text-audio packages; determining the text-audio pair of each to-be-screened text-audio package contained in the set of to-be-screened text-audio packages as a to-be-screened text-audio pair; and screening out the spliced text-audio package from the set of to-be-screened text-audio packages based on the audio feature of the audio segment of the target text-audio pair and the audio features of the audio segments of the to-be-screened text-audio pairs.
- The sample generation method according to claim 6, wherein screening out the spliced text-audio package from the set of to-be-screened text-audio packages based on the audio feature of the audio segment of the target text-audio pair and the audio features of the audio segments of the to-be-screened text-audio pairs comprises: determining a first audio feature of the audio segment of the target text-audio pair according to the target text-audio package, and determining a second audio feature of the audio segment of each to-be-screened text-audio pair according to the corresponding to-be-screened text-audio package; calculating a feature distance between the first audio feature and the second audio feature; and determining the to-be-screened text-audio package to which a to-be-screened text-audio pair whose feature distance is less than a preset distance threshold belongs as the spliced text-audio package.
- The sample generation method according to claim 1, wherein before the step of splicing the target text-audio pair and the spliced text-audio pair into the to-be-detected text-audio pair and detecting the to-be-detected text-audio pair, the method further comprises: performing sampling processing on the audio segment in the target text-audio pair to obtain target sampling information, and determining target text information of the text segment in the target text-audio pair; determining whether the target sampling information and the target text information satisfy the preset detection condition; and if not, performing the step of splicing the target text-audio pair and the spliced text-audio pair into the to-be-detected text-audio pair and detecting the to-be-detected text-audio pair.
- The sample generation method according to claim 8, wherein if the result of determining whether the sampling information and the text information satisfy the preset detection condition is yes, the target text-audio pair is written into the training database.
- The sample generation method according to claim 1, wherein splicing the target text-audio pair and the spliced text-audio pair into the to-be-detected text-audio pair comprises: extracting a target text segment and a target audio segment from the target text-audio pair, and a spliced text segment and a spliced audio segment from the spliced text-audio pair; splicing the target text segment and the spliced text segment into a to-be-detected text segment, and splicing the target audio segment and the spliced audio segment into a to-be-detected audio segment; and composing the to-be-detected text-audio pair from the to-be-detected text segment and the to-be-detected audio segment.
- The sample generation method according to claim 10, wherein detecting the to-be-detected text-audio pair comprises: performing sampling processing on the to-be-detected audio segment to obtain to-be-detected sampling information, and determining to-be-detected text information of the to-be-detected text segment; and detecting the to-be-detected sampling information and the to-be-detected text information based on the preset detection condition; and correspondingly, writing the to-be-detected text-audio pair into the training database when the to-be-detected text-audio pair satisfies the preset detection condition comprises: writing the to-be-detected text-audio pair into the training database when both the to-be-detected sampling information and the to-be-detected text information satisfy the preset detection condition.
- The sample generation method according to claim 1, wherein after the step of splicing the target text-audio pair and the spliced text-audio pair into the to-be-detected text-audio pair and detecting the to-be-detected text-audio pair, the method further comprises: when the to-be-detected text-audio pair does not satisfy the preset detection condition, screening out, from the plurality of text-audio pairs according to the audio features, a multi-degree spliced text-audio pair corresponding to the spliced text-audio pair; splicing the multi-degree spliced text-audio pair and the to-be-detected text-audio pair into a multi-degree to-be-detected text-audio pair, and determining whether the multi-degree to-be-detected text-audio pair satisfies the preset detection condition; if so, writing the multi-degree to-be-detected text-audio pair into the training database; and if not, taking the multi-degree spliced text-audio pair as the spliced text-audio pair and the multi-degree to-be-detected text-audio pair as the to-be-detected text-audio pair, and performing the step of screening out, from the plurality of text-audio pairs according to the audio features, the multi-degree spliced text-audio pair corresponding to the spliced text-audio pair.
- The sample generation method according to claim 1, wherein after the step of writing the to-be-detected text-audio pair into the training database, the method further comprises: extracting a sample text-audio pair from the training database, the sample text-audio pair containing a sample text segment and a sample audio segment; and training a speech synthesis model based on the sample text segment and the sample audio segment to obtain a target speech synthesis model.
- A sample generation apparatus, comprising: an acquisition module configured to acquire a plurality of text-audio pairs, wherein each text-audio pair contains a text segment and an audio segment; a calculation module configured to calculate an audio feature of the audio segment of each text-audio pair in the plurality of text-audio pairs, and to screen out, from the plurality of text-audio pairs according to the audio features, a target text-audio pair and a spliced text-audio pair corresponding to the target text-audio pair; a splicing module configured to splice the target text-audio pair and the spliced text-audio pair into a to-be-detected text-audio pair, and to detect the to-be-detected text-audio pair; and a writing module configured to write the to-be-detected text-audio pair into a training database when the to-be-detected text-audio pair satisfies a preset detection condition.
- A computing device, comprising a memory and a processor, the memory being configured to store computer-executable instructions and the processor being configured to execute the computer-executable instructions to implement the following method: acquiring a plurality of text-audio pairs, wherein each text-audio pair contains a text segment and an audio segment; calculating an audio feature of the audio segment of each text-audio pair in the plurality of text-audio pairs, and screening out, from the plurality of text-audio pairs according to the audio features, a target text-audio pair and a spliced text-audio pair corresponding to the target text-audio pair; splicing the target text-audio pair and the spliced text-audio pair into a to-be-detected text-audio pair, and detecting the to-be-detected text-audio pair; and when the to-be-detected text-audio pair satisfies a preset detection condition, writing the to-be-detected text-audio pair into a training database.
- A computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the sample generation method according to any one of claims 1 to 13.
Priority Applications (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| KR1020237017827A (KR20230079503A) | 2020-11-20 | 2021-11-12 | Sample generation method and apparatus |
| US18/253,717 (US11810546B2) | 2020-11-20 | 2021-11-12 | Sample generation method and apparatus |

Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011309190.7 | 2020-11-20 | | |
| CN202011309190.7A (CN112133277B) | 2020-11-20 | 2020-11-20 | Sample generation method and apparatus |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2022105693A1 | 2022-05-27 |
Family ID: 73852445

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2021/130459 (WO2022105693A1) | Sample generation method and apparatus | 2020-11-20 | 2021-11-12 |
Country Status (4)

| Country | Document |
|---|---|
| US | US11810546B2 |
| KR | KR20230079503A |
| CN | CN112133277B |
| WO | WO2022105693A1 |
Cited By (1)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN118366478A | 2024-06-19 | 2024-07-19 | 中国科学院自动化研究所 | Generated-audio discrimination and generated-region localization method based on phoneme interval sequences |
Families Citing this family (6)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN112133277B | 2020-11-20 | 2021-02-26 | 北京猿力未来科技有限公司 | Sample generation method and apparatus |
| CN112686041B | 2021-01-06 | 2024-06-04 | 北京猿力未来科技有限公司 | Pinyin annotation method and apparatus |
| CN112863530B | 2021-01-07 | 2024-08-27 | 广州欢城文化传媒有限公司 | Method and apparatus for generating an audio work |
| CN113241054B | 2021-05-10 | 2023-03-21 | 北京声智科技有限公司 | Speech smoothing model generation method, speech smoothing method and apparatus |
| CN113658581B | 2021-08-18 | 2024-03-01 | 北京百度网讯科技有限公司 | Acoustic model training and speech processing method, apparatus, device and storage medium |
| CN114694629B | 2022-04-08 | 2024-09-10 | 思必驰科技股份有限公司 | Speech data augmentation method and system for speech synthesis |
Citations (5)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20060229876A1 | 2005-04-07 | 2006-10-12 | International Business Machines Corporation | Method, apparatus and computer program providing a multi-speaker database for concatenative text-to-speech synthesis |
| CN105336322A | 2015-09-30 | 2016-02-17 | 百度在线网络技术(北京)有限公司 | Polyphonic-character model training method, speech synthesis method and apparatus |
| CN109817198A | 2019-03-06 | 2019-05-28 | 广州多益网络股份有限公司 | Multi-pronunciation training method for speech synthesis, speech synthesis method and apparatus |
| CN110310626A | 2019-05-23 | 2019-10-08 | 平安科技(深圳)有限公司 | Speech training data generation method, apparatus, device and readable storage medium |
| CN112133277A | 2020-11-20 | 2020-12-25 | 北京猿力未来科技有限公司 | Sample generation method and apparatus |
Family Cites Families (12)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US6618699B1 | 1999-08-30 | 2003-09-09 | Lucent Technologies Inc. | Formant tracking based on phoneme information |
| US7983919B2 | 2007-08-09 | 2011-07-19 | At&T Intellectual Property II, L.P. | System and method for performing speech synthesis with a cache of phoneme sequences |
| CA2841883A1 | 2011-07-25 | 2013-01-31 | Frank Rudzicz | System and method for acoustic transformation |
| US9881631B2 | 2014-10-21 | 2018-01-30 | Mitsubishi Electric Research Laboratories, Inc. | Method for enhancing audio signal using phase information |
| GB2544070B | 2015-11-04 | 2021-12-29 | The Chancellor, Masters and Scholars of the University of Cambridge | Speech processing system and method |
| US11961589B2 | 2017-11-28 | 2024-04-16 | Grail, LLC | Models for targeted sequencing |
| US11170761B2 | 2018-12-04 | 2021-11-09 | Sorenson IP Holdings, LLC | Training of speech recognition systems |
| CN110428811B | 2019-09-17 | 2021-09-07 | 北京声智科技有限公司 | Data processing method, apparatus and electronic device |
| CN110689879B | 2019-10-10 | 2022-02-25 | 中国科学院自动化研究所 | Training method, system and apparatus for an end-to-end speech transcription model |
| CN111126001A | 2019-11-19 | 2020-05-08 | 深圳追一科技有限公司 | Text annotation method, apparatus, device and storage medium |
| US11514948B1 | 2020-01-09 | 2022-11-29 | Amazon Technologies, Inc. | Model-based dubbing to translate spoken audio in a video |
| CN111862942B | 2020-07-28 | 2022-05-06 | 思必驰科技股份有限公司 | Training method and system for a mixed Mandarin and Sichuanese speech recognition model |
Also Published As

| Publication number | Publication date |
|---|---|
| US20230317052A1 | 2023-10-05 |
| CN112133277B | 2021-02-26 |
| KR20230079503A | 2023-06-07 |
| CN112133277A | 2020-12-25 |
| US11810546B2 | 2023-11-07 |
Legal Events

| Code | Title | Description |
|---|---|---|
| 121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 21893840; Country of ref document: EP; Kind code of ref document: A1 |
| WWE | WIPO information: entry into national phase | Ref document number: 202317034541; Country of ref document: IN |
| ENP | Entry into the national phase | Ref document number: 20237017827; Country of ref document: KR; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | EP: PCT application non-entry in European phase | Ref document number: 21893840; Country of ref document: EP; Kind code of ref document: A1 |