WO2021012734A1 - Audio separation method and apparatus, electronic device and computer-readable storage medium - Google Patents

Audio separation method and apparatus, electronic device and computer-readable storage medium

Info

Publication number
WO2021012734A1
Authority
WO
WIPO (PCT)
Prior art keywords
voice
speech
preset
text
voiceprint
Prior art date
2019-07-25
Application number
PCT/CN2020/086757
Other languages
French (fr)
Chinese (zh)
Inventor
高立志
Original Assignee
深圳壹账通智能科技有限公司
Priority date
2019-07-25 (Chinese application CN 201910678465.5)
Application filed by 深圳壹账通智能科技有限公司 filed Critical 深圳壹账通智能科技有限公司
Publication of WO2021012734A1

Classifications

    • G PHYSICS
        • G06 COMPUTING; CALCULATING OR COUNTING
            • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
                • G06N 20/00 Machine learning
        • G10 MUSICAL INSTRUMENTS; ACOUSTICS
            • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
                • G10L 15/00 Speech recognition
                    • G10L 15/02 Feature extraction for speech recognition; Selection of recognition unit
                    • G10L 15/08 Speech classification or search
                        • G10L 15/14 Speech classification or search using statistical models, e.g. Hidden Markov Models [HMMs]
                            • G10L 15/142 Hidden Markov Models [HMMs]
                    • G10L 15/26 Speech to text systems
                • G10L 17/00 Speaker identification or verification techniques
                    • G10L 17/04 Training, enrolment or model building
                • G10L 21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
                    • G10L 21/02 Speech enhancement, e.g. noise reduction or echo cancellation
                        • G10L 21/0208 Noise filtering
                        • G10L 21/0272 Voice signal separating
                            • G10L 21/028 Voice signal separating using properties of sound source

Definitions

  • This application relates to the field of speech processing, and in particular to an audio separation method, device, electronic equipment, and computer-readable storage medium.
  • In current general-purpose speech recognition, when multiple people speak, the recognized text mixes the content of all speakers. The inventor realized that such text cannot be attributed to individual speakers, which degrades the recognition effect and accuracy.
  • The first aspect of the present application provides an audio separation method, the method including:
  • acquiring voice; performing noise filtering on the voice; extracting voiceprint feature data from the filtered voice, inputting the voiceprint feature data into a preset voice classification model to obtain a classification result, and, according to the classification result, encoding the voice corresponding to the same voiceprint feature data and storing it as a separate voice file so as to separate the voice; and recognizing the separated voice to obtain its recognized text.
  • The second aspect of the present application provides an audio separation device, the device including:
  • an acquisition module, used to acquire voice;
  • a noise filtering module, used to perform noise filtering on the voice;
  • a voice separation module, used to extract voiceprint feature data from the filtered voice, input the voiceprint feature data into a preset voice classification model to obtain a classification result, and, according to the classification result, encode the voice corresponding to the same voiceprint feature data and store it as a separate voice file so as to separate the voice; and
  • a text recognition module, used to recognize the separated voice to obtain its recognized text.
  • The third aspect of the present application provides an electronic device, the electronic device including a processor configured to implement the audio separation method when executing a computer program stored in a memory, the audio separation method including:
  • acquiring voice; performing noise filtering on the voice; extracting voiceprint feature data from the filtered voice, inputting the voiceprint feature data into a preset voice classification model to obtain a classification result, and, according to the classification result, encoding the voice corresponding to the same voiceprint feature data and storing it as a separate voice file so as to separate the voice; and recognizing the separated voice to obtain its recognized text.
  • The fourth aspect of the present application provides a computer-readable storage medium on which a computer program is stored, the computer program implementing the audio separation method when executed by a processor, the audio separation method including:
  • acquiring voice; performing noise filtering on the voice; extracting voiceprint feature data from the filtered voice, inputting the voiceprint feature data into a preset voice classification model to obtain a classification result, and, according to the classification result, encoding the voice corresponding to the same voiceprint feature data and storing it as a separate voice file so as to separate the voice; and recognizing the separated voice to obtain its recognized text.
  • This application uses a preset voice classification model to separate the filtered voice according to its voiceprint features and recognizes the separated voice to obtain its recognized text. It can thus recognize the speech text of words spoken by different people in the voice, improving the accuracy of speech recognition.
  • Fig. 1 is a flowchart of an audio separation method in an embodiment of the present application.
  • Fig. 2 is a schematic diagram of an application environment of an audio separation method in an embodiment of the present application.
  • Fig. 3 is a schematic diagram of an audio separation device in an embodiment of the present application.
  • Fig. 4 is a schematic diagram of an electronic device in an embodiment of the present application.
  • the audio separation method of the present application is applied to one or more electronic devices.
  • The electronic device is a device that can automatically perform numerical calculation and/or information processing according to preset or stored instructions. Its hardware includes, but is not limited to, microprocessors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), digital signal processors (DSPs), and embedded devices.
  • the electronic device may be a computing device such as a desktop computer, a notebook computer, a tablet computer, and a cloud server.
  • the device can interact with the user through a keyboard, a mouse, a remote control, a touch panel, or a voice control device.
  • Fig. 1 is a flowchart of an audio separation method in an embodiment of the present application. According to different needs, the order of the steps in the flowchart can be changed, and some steps can be omitted.
  • the audio separation method specifically includes the following steps:
  • Step S11: Acquire voice.
  • FIG. 2 shows an application environment diagram of an audio separation method in an embodiment of this application.
  • the method is applied in a terminal device 1.
  • the terminal device 1 includes a voice acquisition unit 11.
  • the terminal device 1 acquires voice through the voice acquiring unit 11.
  • The voice acquisition unit 11 may include, but is not limited to, dynamic (moving-coil), condenser, piezoelectric, electromagnetic, and semiconductor microphones.
  • the terminal device 1 can receive a voice sent by an external device 2 communicatively connected with the terminal device 1.
  • the terminal device 1 obtains voice from the storage device of the terminal device 1.
  • Step S12: Perform noise filtering on the voice.
  • In detail, the terminal device 1 filters the environmental noise in the voice. For example, when the terminal device 1 acquires voice through the voice acquiring unit 11 in a noisy environment, the voice includes the environmental noise of the environment where the terminal device 1 is located. In a specific embodiment, the terminal device 1 detects whether the decibel level of the acquired voice is within a preset decibel threshold range, and when it is not, the terminal device 1 performs noise filtering on the voice.
  • The preset decibel threshold can be set as required; in this embodiment, the preset decibel threshold range can be set to 70 to 80 dB.
  • In detail, the terminal device 1 selects, from the voice, the voice information whose decibel level exceeds the first decibel threshold as environmental noise, and deletes that environmental noise, so as to achieve noise filtering of the voice.
  • The first decibel threshold can be set as required; for example, it can be set to 80 dB.
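  • A minimal sketch of this decibel-threshold filtering is shown below. It assumes float PCM samples in a NumPy array; the frame length and the dB calibration offset are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def filter_by_decibel(samples: np.ndarray, sample_rate: int,
                      max_db: float = 80.0, frame_ms: int = 20) -> np.ndarray:
    """Silence frames whose level exceeds max_db, treating them as
    environmental noise per the first-decibel-threshold rule above."""
    frame_len = int(sample_rate * frame_ms / 1000)
    out = samples.copy()
    for start in range(0, len(samples) - frame_len + 1, frame_len):
        frame = samples[start:start + frame_len]
        rms = np.sqrt(np.mean(frame ** 2)) + 1e-12
        level_db = 20 * np.log10(rms) + 94.0  # assumed dBFS-to-dB calibration offset
        if level_db > max_db:
            out[start:start + frame_len] = 0.0  # delete the over-threshold segment
    return out
```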
  • In other embodiments, the terminal device 1 filters the environmental noise in the voice through a deep-learning voiceprint noise-reduction method.
  • Filtering the environmental noise through deep-learning voiceprint noise reduction includes: establishing a machine learning and deep learning model; establishing a voiceprint recognition model; passing the acquired voice through the machine learning and deep learning model so that the environmental noise in the voice is recognized and distinguished; filtering the recognized voice to remove the environmental noise that does not belong to human speech audio, obtaining a preliminarily screened voice; determining whether the preliminarily screened voice reaches a preset threshold; and, when it does, comparing the voice that reaches the preset threshold against the voiceprint recognition model, retaining the voice frequencies and spectral images consistent with the voiceprint recognition model and eliminating the voices that are not, to obtain the voice processed by voiceprint noise reduction.
  • In detail, the terminal device 1 uses a large amount of collected environmental audio and a large amount of specific speakers' speech audio to build the machine learning and deep learning model; all of the environmental audio and speech audio is converted into spectrogram form and imported into the terminal device 1, and through extensive repeated training the model learns to distinguish environmental noise (ambient sound) from the speech spectrograms of specific speakers.
  • Each individual's unique voiceprint can be observed in the spectrogram.
  • To build the voiceprint recognition model, the voiceprint of a specific speaker is acquired, feature extraction is performed on it, the speaker's existing voiceprints are used to build a voiceprint spectrogram, and features are extracted from that spectrogram; in this way a voiceprint recognition model belonging only to that person can be established.
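  • A minimal sketch of producing such a spectrogram with librosa follows; the sample rate, FFT size, and hop length are illustrative assumptions, not values from the disclosure.

```python
import librosa
import numpy as np

def voice_spectrogram(path: str) -> np.ndarray:
    """Load an audio file and return a log-magnitude spectrogram (dB),
    the image-like input the learning models above consume."""
    signal, sr = librosa.load(path, sr=16000)           # 16 kHz is an assumed rate
    stft = librosa.stft(signal, n_fft=512, hop_length=160)
    return librosa.amplitude_to_db(np.abs(stft), ref=np.max)
```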
  • Modeling methods for voiceprint recognition models fall into three types: text-dependent, text-independent, and text-prompted. Since the input voice content cannot be determined in advance, the text-independent type is selected for voiceprint modeling, thereby obtaining the voiceprint recognition model.
  • Text-independent methods include GMM-UBM, GMM-SVM, GMM-UBM-LF, and i-vector/PLDA.
  • GMM-UBM is selected to build the voiceprint model of the speaker verification system.
  • The MFCC feature vectors are extracted and, after repeated training on a large amount of human voiceprint data, MAP adaptation, and verification decisions, a human voiceprint recognition model with a high voiceprint recognition rate is obtained.
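  • The GMM-UBM flow can be sketched with scikit-learn's GaussianMixture standing in for the UBM trainer and a simplified mean-only MAP adaptation; the component count and relevance factor are conventional assumptions, and a full system would add the verification decision step.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def train_ubm(background_mfcc: np.ndarray, n_components: int = 64) -> GaussianMixture:
    """Fit the universal background model on pooled MFCC frames from many speakers."""
    ubm = GaussianMixture(n_components=n_components, covariance_type='diag')
    ubm.fit(background_mfcc)
    return ubm

def map_adapt_means(ubm: GaussianMixture, speaker_mfcc: np.ndarray,
                    relevance: float = 16.0) -> np.ndarray:
    """Mean-only MAP adaptation: shift each UBM mean toward the speaker's data."""
    post = ubm.predict_proba(speaker_mfcc)          # (frames, components)
    n_k = post.sum(axis=0)                          # soft frame counts per component
    ex = post.T @ speaker_mfcc                      # (components, dims) weighted sums
    speaker_means = np.where(n_k[:, None] > 0,
                             ex / np.maximum(n_k[:, None], 1e-9), ubm.means_)
    alpha = (n_k / (n_k + relevance))[:, None]      # adaptation weight per component
    return alpha * speaker_means + (1 - alpha) * ubm.means_
```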
  • The MFCC feature extraction process includes: inputting sample speech; pre-emphasizing, framing, and windowing the sample speech; applying the Fourier transform, Mel-frequency filtering, and log energy computation to the processed speech; and calculating the cepstrum of the sample and outputting the MFCC image.
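  • A sketch of this MFCC pipeline, assuming librosa for the spectral stages; the pre-emphasis coefficient (0.97), sample rate, and frame parameters are conventional assumptions rather than values from the disclosure.

```python
import numpy as np
import librosa

def extract_mfcc(path: str, n_mfcc: int = 13) -> np.ndarray:
    signal, sr = librosa.load(path, sr=16000)       # assumed 16 kHz input
    # Pre-emphasis boosts high frequencies before framing.
    emphasized = np.append(signal[0], signal[1:] - 0.97 * signal[:-1])
    # librosa performs framing, windowing, FFT, Mel filtering,
    # log energy, and the DCT (cepstrum) internally.
    return librosa.feature.mfcc(y=emphasized, sr=sr, n_mfcc=n_mfcc,
                                n_fft=512, hop_length=160)
```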
  • the terminal device 1 filters out white noise in the speech.
  • White noise refers to noise whose energy is equal in frequency bands of equal bandwidth over a wide frequency range.
  • In detail, the white noise in the speech can be removed by a wavelet transform algorithm or a Kalman filter algorithm.
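  • A minimal sketch of wavelet-based white-noise removal with PyWavelets; the db4 wavelet, decomposition level, and universal soft threshold are standard choices assumed here, not specified by the disclosure.

```python
import numpy as np
import pywt

def wavelet_denoise(signal: np.ndarray, wavelet: str = 'db4', level: int = 4) -> np.ndarray:
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate the noise level from the finest detail coefficients,
    # then apply the universal (VisuShrink) soft threshold.
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))
    coeffs[1:] = [pywt.threshold(c, thresh, mode='soft') for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(signal)]
```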
  • Step S13: Extract voiceprint feature data from the filtered voice, input the voiceprint feature data into the preset voice classification model to obtain a classification result, and, according to the classification result, encode the voice corresponding to the same voiceprint feature data and store it as a separate voice file, thereby separating the voice.
  • the voiceprint feature can be used to verify the speaker's identity and distinguish the speaker's voice.
  • The voiceprint feature data includes, but is not limited to, Mel cepstrum coefficients (MFCC), perceptual linear prediction coefficients (PLP), deep features (Deep Feature), and power-normalized cepstral coefficients (PNCC).
  • In detail, the terminal device 1 uses wavelet transform technology to extract voiceprint feature data such as MFCC, PLP, Deep Feature, or PNCC from the filtered voice, and inputs the voiceprint feature data into the preset voice classification model to obtain the classification result.
  • According to the classification result, the voice corresponding to the same voiceprint feature data is encoded and stored as a separate voice file.
  • The preset voice classification model includes at least one of the following: a support vector machine model, a random forest model, and a neural network model.
  • the terminal device uses a pre-trained preset voice classification model to determine the category of the voiceprint feature data according to the extracted voiceprint feature data.
  • the categories of the voiceprint feature data include: a first voiceprint feature category, a second voiceprint feature category, and a third voiceprint feature category.
  • the training process of the preset voice classification model includes:
  • The voiceprint feature data of positive samples and of negative samples is randomly divided into a training set in a first preset ratio and a verification set in a second preset ratio; the training set is used to train the preset voice classification model, and the verification set is used to verify the accuracy of the trained model.
  • The training samples with different voiceprint feature categories in the training set are distributed to different folders. For example, the training samples of the first voiceprint feature category are distributed to a first folder, those of the second voiceprint feature category to a second folder, and those of the third voiceprint feature category to a third folder. Training samples in the first preset ratio (for example, 70%) are then extracted from the different folders as the total training samples to train the preset voice classification model, and the remaining training samples in the second preset ratio (for example, 30%) are taken from the different folders as the total test samples to verify the accuracy of the trained model.
  • If the accuracy is greater than or equal to a preset accuracy, the training ends and the trained preset voice classification model is used as a classifier to identify the category of the voiceprint feature data; if the accuracy is less than the preset accuracy, the numbers of positive and negative samples are increased and the preset voice classification model is retrained until the accuracy is greater than or equal to the preset accuracy.
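  • A sketch of this 70/30 training-and-verification flow, with a scikit-learn SVM standing in for the preset voice classification model; the feature arrays, labels, and the target accuracy are placeholders.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

def train_voice_classifier(features: np.ndarray, labels: np.ndarray,
                           target_accuracy: float = 0.95):
    """Split voiceprint features 70/30, train, and report validation accuracy.
    The returned flag tells the caller whether more positive/negative
    samples are needed, per the retraining rule in the text."""
    x_train, x_val, y_train, y_val = train_test_split(
        features, labels, train_size=0.7, stratify=labels)
    clf = SVC(kernel='rbf')
    clf.fit(x_train, y_train)
    accuracy = accuracy_score(y_val, clf.predict(x_val))
    needs_more_samples = accuracy < target_accuracy
    return clf, accuracy, needs_more_samples
```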
  • In some embodiments, the terminal device 1 also performs enhancement and amplification processing on the voice corresponding to the same voiceprint feature data, and encodes the voice after that processing. That is, the terminal device 1 separates the voices with different voiceprint features according to the voiceprint features, strengthens and amplifies each separated voice, and encodes the voice corresponding to the same voiceprint feature and stores it as a separate voice file.
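  • A sketch of this per-speaker post-processing, assuming the soundfile package for output and using peak normalization to stand in for the unspecified enhancement and amplification step.

```python
import numpy as np
import soundfile as sf

def store_separated_voices(streams: dict, sample_rate: int) -> None:
    """`streams` maps a speaker/voiceprint id to its separated samples."""
    for speaker_id, samples in streams.items():
        peak = np.max(np.abs(samples))
        if peak > 0:
            samples = samples * (0.9 / peak)   # peak-normalize as the amplification step
        sf.write(f"speaker_{speaker_id}.wav", samples, sample_rate)
```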
  • Step S14: Recognize the separated voice to obtain its recognized text.
  • In detail, the terminal device 1 converts the separated speech into text through speech recognition and uses it as the initial speech recognition text; it then matches the initial speech recognition text against a preset text database to obtain the matched speech recognition text.
  • The specific process by which the terminal device 1 converts the separated speech into text through speech recognition includes the following.
  • The decoding algorithm is the Viterbi algorithm.
  • For example, the voice to be recognized is "你好" ("hello"). After feature extraction it is transformed into 39-dimensional acoustic feature vectors, and multiple HMM phoneme models yield the corresponding sub-word units /n/ /i/ /h/ /ao/. The sub-words are spliced into candidate characters according to the preset pronunciation dictionary, such as 你 ("you") or 尼 for "ni" and 好 ("good") or 号 ("number") for "hao"; the Viterbi algorithm then decodes the optimal sequence "你好" and outputs the text.
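  • A toy Viterbi decoder illustrating the decoding step in this example; the log-probability tables would come from the HMM phoneme models and are left as inputs here.

```python
import numpy as np

def viterbi(log_start, log_trans, log_emit, observations):
    """Return the most likely state sequence for `observations`.
    log_start: (S,), log_trans: (S, S), log_emit: (S, V) log-probabilities."""
    n_states = log_start.shape[0]
    n_obs = len(observations)
    score = np.full((n_obs, n_states), -np.inf)
    back = np.zeros((n_obs, n_states), dtype=int)
    score[0] = log_start + log_emit[:, observations[0]]
    for t in range(1, n_obs):
        for s in range(n_states):
            cand = score[t - 1] + log_trans[:, s]   # best predecessor for state s
            back[t, s] = int(np.argmax(cand))
            score[t, s] = cand[back[t, s]] + log_emit[s, observations[t]]
    path = [int(np.argmax(score[-1]))]              # backtrack from the best end state
    for t in range(n_obs - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]
```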
  • At least two text databases may be preset, for example, a first text database and a second text database.
  • The first text database can be dedicated to storing modal particles, such as "um", "ah", and "yeah"; such particles have nothing to do with the meeting content and easily harm the readability of the transcribed speech.
  • The second text database can be dedicated to storing professional terms and their corresponding pinyin, such as "feature vector", "feature matrix", and "tensor analysis"; professional terms are relatively complex and therefore tend to be misrecognized in batches during speech recognition.
  • A third text database can also be preset according to the actual situation, specifically for storing entries such as person names or place names. This application does not specifically restrict the number of preset text databases or their contents.
  • the terminal device 1 matching the initial voice recognition text with a preset text database specifically includes:
  • Matching the initial speech recognition text with the preset first text database includes: determining whether the initial speech recognition text contains a first word that matches a word in the preset first text database; and, when such a first word exists, processing the matched first word in the initial speech recognition text.
  • Processing the matched first word in the initial speech recognition text may further include: judging, according to a pre-trained modal-particle model based on a deep learning network, whether the matched first word is a modal particle to be deleted; when it is, eliminating the matched first word from the initial speech recognition text; and when it is not, retaining the matched first word in the initial speech recognition text.
  • the initial speech recognition text is "this is very easy to use”
  • the modal word "this” is stored in the preset first text database
  • the initial speech recognition text is matched with the preset first text database to determine The matched word is "this”, and then judge whether the matched first word "this” is the modal particle to be deleted according to the pre-trained modal particle model based on the deep learning network.
  • the network's modal particle model determines that the matched first word "this” does not belong to the modal particle to be deleted in "this is very useful”, then the first matching word in the initial speech recognition text is retained, The first matching result obtained is "This is pretty easy to use”.
  • the initial speech recognition text is "this, we are going to have a meeting”
  • the first text database is preset to store the modal word "this”
  • the initial speech recognition text is matched with the preset first text database to determine The matched word is "this”, and then judge whether the matched first word "this” is the modal particle to be deleted according to the pre-trained modal particle model based on the deep learning network.
  • the network's modal particle model determines that the first matching word "this” belongs to the modal particle to be deleted in "this, we are going to have a meeting”, and then the first matching word in the initial speech recognition text is eliminated.
  • the first matching result obtained was "We are going to have a meeting.”
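  • A sketch of the first-database matching in these two examples. The deletable-particle decision is a toy heuristic standing in for the pre-trained deep-learning modal-particle model, which the disclosure does not specify.

```python
MODAL_PARTICLES = {"this", "um", "ah", "yeah"}  # assumed contents of the first text database

def is_deletable_particle(word: str, sentence: str) -> bool:
    """Stand-in for the pre-trained deep-learning modal-particle model.
    The toy rule mirrors the two worked examples: a matched word used
    as a detached filler (followed by a comma) is deleted."""
    return f"{word}," in sentence

def apply_first_database(sentence: str) -> str:
    kept = []
    for token in sentence.split():
        word = token.strip(",")
        if word in MODAL_PARTICLES and is_deletable_particle(word, sentence):
            continue                      # eliminate the matched modal particle
        kept.append(token)                # retain words the model keeps
    return " ".join(kept)

# apply_first_database("this is very easy to use")            -> unchanged
# apply_first_database("this, we are going to have a meeting") -> "we are going to have a meeting"
```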
  • the matching the first matching result with a preset second text database includes:
  • For example, the first matching result is "this is an original giant earthquake", and the words in the first matching result are converted into the first pinyin "zhe shi yige yuanshi juzhen".
  • The preset second text database stores the professional word "matrix" with its corresponding second pinyin "juzhen". When it is determined that a second pinyin identical to the first pinyin exists in the preset second text database, the word "matrix" corresponding to the second pinyin "juzhen" is extracted as the word for the first pinyin "juzhen", and the second matching result is "this is an original matrix".
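  • A sketch of this pinyin-keyed correction, assuming the pypinyin package for character-to-pinyin conversion; the database contents and the Chinese characters 巨震 ("giant earthquake") and 矩阵 ("matrix") are a reconstruction of the example, both pronounced "juzhen".

```python
from pypinyin import lazy_pinyin  # assumed pinyin-conversion library

# Assumed second text database: pinyin key -> professional word.
PROFESSIONAL_WORDS = {"juzhen": "矩阵"}  # "matrix"

def apply_second_database(words: list[str]) -> list[str]:
    """Replace any word whose pinyin matches a stored professional word."""
    corrected = []
    for word in words:
        pinyin = "".join(lazy_pinyin(word))
        corrected.append(PROFESSIONAL_WORDS.get(pinyin, word))
    return corrected

# apply_second_database(["这", "是", "一个", "原始", "巨震"])
# -> ["这", "是", "一个", "原始", "矩阵"]  ("this is an original matrix")
```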
  • This application converts the separated speech into text through speech recognition technology as the initial speech recognition text, and matches the initial speech recognition text with preset text databases to obtain the matched speech recognition text. It can thus recognize the speech text of words spoken by different people in the voice, which is convenient for the recorder to gather information.
  • FIG. 3 is a schematic diagram of an audio separation device 40 in an embodiment of the application.
  • the audio separation device 40 runs in an electronic device.
  • the audio separation device 40 may include multiple functional modules composed of program code segments.
  • the program code of each program segment in the audio separation device 40 can be stored in a memory and executed by at least one processor to perform the audio separation function.
  • the audio separation device 40 can be divided into multiple functional modules according to the functions it performs.
  • the audio separation device 40 may include an acquisition module 401, a noise filtering module 402, a speech separation module 403, and a text recognition module 404.
  • A module referred to in this application is a series of computer program segments that can be executed by at least one processor to complete fixed functions, and that are stored in a memory. The functions of each module are detailed below.
  • the acquiring module 401 is used for acquiring voice.
  • the acquisition module 401 acquires voice through the voice acquisition unit 11.
  • The voice acquisition unit 11 may include, but is not limited to, dynamic (moving-coil), condenser, piezoelectric, electromagnetic, and semiconductor microphones.
  • the acquisition module 401 can receive the voice sent by the external device 2 communicatively connected with the terminal device 1.
  • the acquiring module 401 acquires the voice from the storage device of the terminal device 1.
  • the noise filtering module 402 is configured to perform noise filtering on the speech.
  • the noise filtering module 402 filters the environmental noise in the speech.
  • In a specific embodiment, the noise filtering module 402 detects whether the decibel level of the acquired voice is within a preset decibel threshold range, and when it is not, the noise filtering module 402 performs noise filtering on the voice.
  • The preset decibel threshold can be set as required; in this embodiment, the preset decibel threshold range can be set to 70 to 80 dB.
  • In detail, the noise filtering module 402 selects, from the voice, the voice information whose decibel level exceeds the first decibel threshold as environmental noise, and deletes that environmental noise, so as to achieve noise filtering of the voice.
  • The first decibel threshold can be set as required; for example, it can be set to 80 dB.
  • In other embodiments, the noise filtering module 402 filters the environmental noise in the voice through a deep-learning voiceprint noise-reduction method.
  • Filtering the environmental noise through deep-learning voiceprint noise reduction includes: establishing a machine learning and deep learning model; establishing a voiceprint recognition model; passing the acquired voice through the machine learning and deep learning model so that the environmental noise in the voice is recognized and distinguished; filtering the recognized voice to remove the environmental noise that does not belong to human speech audio, obtaining a preliminarily screened voice; determining whether the preliminarily screened voice reaches a preset threshold; and, when it does, comparing the voice that reaches the preset threshold against the voiceprint recognition model, retaining the voice frequencies and spectral images consistent with the voiceprint recognition model and eliminating the voices that are not, to obtain the voice processed by voiceprint noise reduction.
  • In detail, the noise filtering module 402 uses a large amount of collected environmental audio and a large amount of specific speakers' speech audio to build the machine learning and deep learning model; all of the environmental audio and speech audio is converted into spectrogram form and imported into the terminal device 1, and through extensive repeated training the model learns to distinguish environmental noise (ambient sound) from the speech spectrograms of specific speakers.
  • Each individual's unique voiceprint can be observed in the spectrogram.
  • To build the voiceprint recognition model, the voiceprint of a specific speaker is acquired, feature extraction is performed on it, the speaker's existing voiceprints are used to build a voiceprint spectrogram, and features are extracted from that spectrogram; in this way a voiceprint recognition model belonging only to that person can be established.
  • Modeling methods for voiceprint recognition models fall into three types: text-dependent, text-independent, and text-prompted. Since the input voice content cannot be determined in advance, the text-independent type is selected for voiceprint modeling, thereby obtaining the voiceprint recognition model.
  • Text-independent methods include GMM-UBM, GMM-SVM, GMM-UBM-LF, and i-vector/PLDA.
  • GMM-UBM is selected to build the voiceprint model of the speaker verification system.
  • The MFCC feature vectors are extracted and, after repeated training on a large amount of human voiceprint data, MAP adaptation, and verification decisions, a human voiceprint recognition model with a high voiceprint recognition rate is obtained.
  • The MFCC feature extraction process includes: inputting sample speech; pre-emphasizing, framing, and windowing the sample speech; applying the Fourier transform, Mel-frequency filtering, and log energy computation to the processed speech; and calculating the cepstrum of the sample and outputting the MFCC image.
  • the noise filtering module 402 filters white noise in the speech.
  • White noise refers to noise whose energy is equal in frequency bands of equal bandwidth over a wide frequency range.
  • In detail, the white noise in the speech can be removed by a wavelet transform algorithm or a Kalman filter algorithm.
  • the voice separation module 403 is configured to use a preset voice classification model to perform separation processing on the filtered voice according to the voiceprint features of the voice.
  • The voice separation module 403 using the preset voice classification model to separate the filtered voice according to its voiceprint features includes: extracting voiceprint feature data from the filtered voice, inputting the voiceprint feature data into the preset voice classification model to obtain a classification result, and, according to the classification result, encoding the voice corresponding to the same voiceprint feature data and storing it as a separate voice file, thereby separating the voice.
  • the voiceprint feature can be used to verify the speaker's identity and distinguish the speaker's voice.
  • The voiceprint feature data includes, but is not limited to, Mel cepstrum coefficients (MFCC), perceptual linear prediction coefficients (PLP), deep features (Deep Feature), and power-normalized cepstral coefficients (PNCC).
  • In detail, the voice separation module 403 uses wavelet transform technology to extract voiceprint feature data such as MFCC, PLP, Deep Feature, or PNCC from the filtered voice, and inputs the voiceprint feature data into the preset voice classification model to obtain the classification result.
  • According to the classification result, the voice corresponding to the same voiceprint feature data is encoded and stored as a separate voice file.
  • The preset voice classification model includes at least one of the following: a support vector machine model, a random forest model, and a neural network model.
  • the terminal device uses a pre-trained preset voice classification model to determine the category of the voiceprint feature data according to the extracted voiceprint feature data.
  • the categories of the voiceprint feature data include: a first voiceprint feature category, a second voiceprint feature category, and a third voiceprint feature category.
  • The training process of the preset voice classification model used to classify the voiceprint feature data includes:
  • The voiceprint feature data of positive samples and of negative samples is randomly divided into a training set in a first preset ratio and a verification set in a second preset ratio; the training set is used to train the preset voice classification model, and the verification set is used to verify the accuracy of the trained model.
  • The training samples with different voiceprint feature categories in the training set are distributed to different folders. For example, the training samples of the first voiceprint feature category are distributed to a first folder, those of the second voiceprint feature category to a second folder, and those of the third voiceprint feature category to a third folder. Training samples in the first preset ratio (for example, 70%) are then extracted from the different folders as the total training samples to train the preset voice classification model, and the remaining training samples in the second preset ratio (for example, 30%) are taken from the different folders as the total test samples to verify the accuracy of the trained model.
  • If the accuracy is greater than or equal to a preset accuracy, the training ends and the trained preset voice classification model is used as a classifier to identify the category of the voiceprint feature data; if the accuracy is less than the preset accuracy, the numbers of positive and negative samples are increased and the preset voice classification model is retrained until the accuracy is greater than or equal to the preset accuracy.
  • In some embodiments, the voice separation module 403 also performs enhancement and amplification processing on the voice corresponding to the same voiceprint feature data, and encodes the voice after that processing. That is, the voice separation module 403 separates the voices with different voiceprint features according to the voiceprint features, strengthens and amplifies each separated voice, and encodes the voice corresponding to the same voiceprint feature and stores it as a separate voice file.
  • the text recognition module 404 is configured to recognize the speech after the separation process to obtain the recognized text of the speech.
  • In detail, the text recognition module 404 converts the separated speech into text through speech recognition as the initial speech recognition text; it then matches the initial speech recognition text against a preset text database to obtain the matched speech recognition text.
  • The specific process by which the text recognition module 404 converts the separated speech into text through speech recognition includes the following.
  • The decoding algorithm is the Viterbi algorithm.
  • For example, the voice to be recognized is "你好" ("hello"). After feature extraction it is transformed into 39-dimensional acoustic feature vectors, and multiple HMM phoneme models yield the corresponding sub-word units /n/ /i/ /h/ /ao/. The sub-words are spliced into candidate characters according to the preset pronunciation dictionary, such as 你 ("you") or 尼 for "ni" and 好 ("good") or 号 ("number") for "hao"; the Viterbi algorithm then decodes the optimal sequence "你好" and outputs the text.
  • At least two text databases may be preset, for example, a first text database and a second text database.
  • The first text database can be dedicated to storing modal particles, such as "um", "ah", and "yeah"; such particles have nothing to do with the meeting content and easily harm the readability of the transcribed speech.
  • The second text database can be dedicated to storing professional terms and their corresponding pinyin, such as "feature vector", "feature matrix", and "tensor analysis"; professional terms are relatively complex and therefore tend to be misrecognized in batches during speech recognition.
  • A third text database can also be preset according to the actual situation, specifically for storing entries such as person names or place names. This application does not specifically restrict the number of preset text databases or their contents.
  • the text recognition module 404 matching the initial speech recognition text with a preset text database specifically includes:
  • Matching the initial speech recognition text with the preset first text database includes: determining whether the initial speech recognition text contains a first word that matches a word in the preset first text database; and, when such a first word exists, processing the matched first word in the initial speech recognition text.
  • Processing the matched first word in the initial speech recognition text may further include: judging, according to a pre-trained modal-particle model based on a deep learning network, whether the matched first word is a modal particle to be deleted; when it is, eliminating the matched first word from the initial speech recognition text; and when it is not, retaining the matched first word in the initial speech recognition text.
  • the initial speech recognition text is "this is very easy to use”
  • the modal word "this” is stored in the preset first text database
  • the initial speech recognition text is matched with the preset first text database to determine The matched word is "this”, and then judge whether the matched first word "this” is the modal particle to be deleted according to the pre-trained modal particle model based on the deep learning network.
  • the network's modal particle model determines that the matched first word "this” does not belong to the modal particle to be deleted in "this is very useful”, then the first matching word in the initial speech recognition text is retained, The first matching result obtained is "This is pretty easy to use”.
  • the initial speech recognition text is "this, we are going to have a meeting”
  • the first text database is preset to store the modal word "this”
  • the initial speech recognition text is matched with the preset first text database to determine The matched word is "this”, and then judge whether the matched first word "this” is the modal particle to be deleted according to the pre-trained modal particle model based on the deep learning network.
  • the network's modal particle model determines that the first matching word "this” belongs to the modal particle to be deleted in "this, we are going to have a meeting”, and then the first matching word in the initial speech recognition text is eliminated.
  • the first matching result obtained was "We are going to have a meeting.”
  • the matching the first matching result with a preset second text database includes:
  • For example, the first matching result is "this is an original giant earthquake", and the words in the first matching result are converted into the first pinyin "zhe shi yige yuanshi juzhen".
  • The preset second text database stores the professional word "matrix" with its corresponding second pinyin "juzhen". When it is determined that a second pinyin identical to the first pinyin exists in the preset second text database, the word "matrix" corresponding to the second pinyin "juzhen" is extracted as the word for the first pinyin "juzhen", and the second matching result is "this is an original matrix".
  • This application converts the separated speech into text through speech recognition technology as the initial speech recognition text, and matches the initial speech recognition text with preset text databases to obtain the matched speech recognition text. It can thus recognize the speech text of words spoken by different people in the voice, which is convenient for the recorder to gather information.
  • FIG. 4 is a schematic diagram of a preferred embodiment of the electronic device 7 of this application.
  • the electronic device 7 includes a memory 71, a processor 72, and a computer program 73 that is stored in the memory 71 and can run on the processor 72.
  • When the processor 72 executes the computer program 73, the steps in the above audio separation method embodiment are implemented, such as steps S11 to S14 shown in FIG. 1. That is, the audio separation method includes: acquiring voice; performing noise filtering on the voice; extracting voiceprint feature data from the filtered voice, inputting the voiceprint feature data into a preset voice classification model to obtain a classification result, and, according to the classification result, encoding the voice corresponding to the same voiceprint feature data and storing it as a separate voice file so as to separate the voice; and recognizing the separated voice to obtain its recognized text.
  • Alternatively, when the processor 72 executes the computer program 73, the functions of the modules/units in the foregoing audio separation device embodiment are implemented, for example, modules 401 to 404 in FIG. 3.
  • Exemplarily, the computer program 73 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 71 and executed by the processor 72 to complete this application.
  • the one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 73 in the electronic device 7.
  • For example, the computer program 73 can be divided into the acquisition module 401, the noise filtering module 402, the voice separation module 403, and the text recognition module 404 in FIG. 3. For the specific functions of each module, refer to the second embodiment.
  • the electronic device 7 and the terminal device 1 are the same device.
  • the electronic device 7 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • The schematic diagram is only an example of the electronic device 7 and does not constitute a limitation on it; the electronic device 7 may include more or fewer components than shown, combine certain components, or have different components. For example, the electronic device 7 may also include input and output devices, network access devices, buses, and the like.
  • The processor 72 may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • The general-purpose processor may be a microprocessor, or the processor 72 may be any conventional processor.
  • The processor 72 is the control center of the electronic device 7 and connects all parts of the entire electronic device 7 through various interfaces and lines.
  • The memory 71 may be used to store the computer program 73 and/or the modules/units. The processor 72 implements the various functions of the electronic device 7 by running or executing the computer programs and/or modules/units stored in the memory 71 and by calling the data stored in the memory 71.
  • the memory 71 may mainly include a storage program area and a storage data area.
  • The storage program area may store an operating system, an application program required by at least one function (such as a sound playback function or an image playback function), etc.; the storage data area may store data created according to the use of the electronic device 7 (such as audio data, a phone book, etc.).
  • The memory 71 may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, memory, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • The integrated module/unit of the electronic device 7 may be stored in a computer-readable storage medium, which may be non-volatile or volatile. Based on this understanding, this application implements all or part of the processes in the above-mentioned embodiments and methods, which can also be completed by instructing the relevant hardware through a computer program.
  • the computer program can be stored in a computer-readable storage medium.
  • When the computer program is executed by a processor, the above audio separation method can be realized, which includes: acquiring voice; performing noise filtering on the voice; extracting voiceprint feature data from the filtered voice, inputting the voiceprint feature data into a preset voice classification model to obtain a classification result, and, according to the classification result, encoding the voice corresponding to the same voiceprint feature data and storing it as a separate voice file so as to separate the voice; and recognizing the separated voice to obtain its recognized text.
  • The computer program includes computer program code, which may be in the form of source code, object code, an executable file, or some intermediate form.
  • The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, etc.
  • The content contained in the computer-readable medium may be appropriately added or deleted in accordance with the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
  • the functional modules in the various embodiments of the present application may be integrated in the same processing module, or each module may exist alone physically, or two or more modules may be integrated in the same module.
  • the above-mentioned integrated modules can be implemented in the form of hardware, or in the form of hardware plus software functional modules.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Quality & Reliability (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Probability & Statistics with Applications (AREA)
  • Machine Translation (AREA)

Abstract

An audio separation method and apparatus, an electronic device and a computer-readable storage medium. The method comprises: acquiring speech (S11); performing noise filtering on the speech (S12); extracting voiceprint feature data from the filtered speech, inputting the voiceprint feature data into a pre-set speech classification model for classification to obtain a classification result, encoding, according to the classification result, the speech corresponding to the same voiceprint feature data and storing same as a separate speech file, and performing separation processing on the speech (S13); and recognizing the speech after separation processing to acquire recognized text of the speech (S14). A pre-set speech classification model is used to perform separation processing on filtered speech according to the voiceprint features of the speech, and the speech after separation processing is recognized to acquire recognized text of the speech, so that the speech text of words spoken by different people in the speech can be recognized, thereby improving the accuracy of speech recognition.

Description

Audio separation method, device, electronic equipment, and computer-readable storage medium
This application claims priority to the Chinese patent application filed with the Chinese Patent Office on July 25, 2019, with application number 201910678465.5 and the invention title "Audio separation method, device, electronic equipment and computer-readable storage medium", the entire content of which is incorporated into this application by reference.
Technical Field
This application relates to the field of speech processing, and in particular to an audio separation method, device, electronic equipment, and computer-readable storage medium.
Background
At present, in general-purpose speech recognition, if multiple people are speaking, the recognized text contains the content of all speakers. The inventor realized that it is then impossible to distinguish who said which text, which affects the recognition effect and accuracy.
Summary of the Invention
In view of the above, it is necessary to provide an audio separation method, device, electronic device, and computer-readable storage medium that improve the accuracy of speech recognition.
The first aspect of the present application provides an audio separation method, the method including:
acquiring voice;
performing noise filtering on the voice;
extracting voiceprint feature data from the filtered voice, inputting the voiceprint feature data into a preset voice classification model to obtain a classification result, and, according to the classification result, encoding the voice corresponding to the same voiceprint feature data and storing it as a separate voice file so as to separate the voice; and
recognizing the separated voice to obtain its recognized text.
The second aspect of the present application provides an audio separation device, the device including:
an acquisition module, used to acquire voice;
a noise filtering module, used to perform noise filtering on the voice;
a voice separation module, used to extract voiceprint feature data from the filtered voice, input the voiceprint feature data into a preset voice classification model to obtain a classification result, and, according to the classification result, encode the voice corresponding to the same voiceprint feature data and store it as a separate voice file so as to separate the voice; and
a text recognition module, used to recognize the separated voice to obtain its recognized text.
The third aspect of the present application provides an electronic device, the electronic device including a processor configured to implement the audio separation method when executing a computer program stored in a memory, the audio separation method including:
acquiring voice;
performing noise filtering on the voice;
extracting voiceprint feature data from the filtered voice, inputting the voiceprint feature data into a preset voice classification model to obtain a classification result, and, according to the classification result, encoding the voice corresponding to the same voiceprint feature data and storing it as a separate voice file so as to separate the voice; and
recognizing the separated voice to obtain its recognized text.
The fourth aspect of the present application provides a computer-readable storage medium on which a computer program is stored, the computer program implementing the audio separation method when executed by a processor, the audio separation method including:
acquiring voice;
performing noise filtering on the voice;
extracting voiceprint feature data from the filtered voice, inputting the voiceprint feature data into a preset voice classification model to obtain a classification result, and, according to the classification result, encoding the voice corresponding to the same voiceprint feature data and storing it as a separate voice file so as to separate the voice; and
recognizing the separated voice to obtain its recognized text.
This application uses a preset voice classification model to separate the filtered voice according to its voiceprint features and recognizes the separated voice to obtain its recognized text; it can thus recognize the speech text of words spoken by different people in the voice, improving the accuracy of speech recognition.
Description of the Drawings
Fig. 1 is a flowchart of an audio separation method in an embodiment of the present application.
Fig. 2 is a schematic diagram of an application environment of the audio separation method in an embodiment of the present application.
Fig. 3 is a schematic diagram of an audio separation apparatus in an embodiment of the present application.
Fig. 4 is a schematic diagram of an electronic device in an embodiment of the present application.
Detailed Description
To make the above objectives, features, and advantages of the present application clearer, the present application is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, where there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with one another.
Many specific details are set forth in the following description to facilitate a full understanding of the present application. The described embodiments are only some, rather than all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of this application without creative work shall fall within the protection scope of this application.
Unless otherwise defined, all technical and scientific terms used herein have the same meanings as commonly understood by those skilled in the technical field of this application. The terms used herein in the description of the application are only for the purpose of describing specific embodiments and are not intended to limit the application.
Preferably, the audio separation method of the present application is applied to one or more electronic devices. An electronic device is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a digital signal processor (DSP), an embedded device, and the like.
The electronic device may be a computing device such as a desktop computer, a notebook computer, a tablet computer, or a cloud server. The device can interact with a user through a keyboard, a mouse, a remote control, a touch panel, a voice-control device, or the like.
Embodiment 1
Fig. 1 is a flowchart of an audio separation method in an embodiment of the present application. According to different needs, the order of the steps in the flowchart can be changed, and some steps can be omitted.
Referring to Fig. 1, the audio separation method specifically includes the following steps.
Step S11: acquire speech.
Please refer to Fig. 2, which shows an application environment diagram of the audio separation method in an embodiment of this application. In this embodiment, the method is applied in a terminal device 1. The terminal device 1 includes a speech acquisition unit 11, through which the terminal device 1 acquires speech. In this embodiment, the speech acquisition unit 11 may include, but is not limited to, a dynamic, condenser, piezoelectric, electromagnetic, or semiconductor microphone. In another embodiment, the terminal device 1 may receive speech sent by an external device 2 communicatively connected to the terminal device 1. In other embodiments, the terminal device 1 obtains speech from a storage device of the terminal device 1.
Step S12: perform noise filtering on the speech.
In one embodiment, the terminal device 1 filters environmental noise out of the speech. For example, when the terminal device 1 acquires speech through the speech acquisition unit 11 in a noisy environment, the speech includes the environmental noise of the environment in which the terminal device 1 is located. In a specific embodiment, the terminal device 1 detects whether the decibel level of the acquired speech is within a preset decibel threshold range; when it is not, the terminal device 1 performs noise filtering on the speech. The preset decibel threshold range can be set as required; in this embodiment, it may be set to 70-80 dB. The terminal device 1 selects, from the speech, the speech information whose decibel level exceeds a first decibel threshold as the environmental noise and deletes it, thereby filtering noise from the speech. In this embodiment, the first decibel threshold can be set as required, for example, to 80 dB.
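By way of illustration only, the decibel-threshold deletion described above might be sketched as follows for 16-bit PCM audio. The frame length and the calibration offset that maps digital RMS to a dB-SPL-like scale are assumptions introduced here, since true sound-pressure levels require microphone calibration:

```python
import numpy as np

FRAME_LEN = 512            # hypothetical frame size in samples
CALIBRATION_DB = 96.0      # assumed offset mapping dBFS to a dB-SPL-like scale
FIRST_DB_THRESHOLD = 80.0  # segments louder than this are treated as environmental noise

def frame_level_db(frame: np.ndarray) -> float:
    """Approximate level of one 16-bit PCM frame: RMS in dBFS plus an assumed calibration."""
    rms = np.sqrt(np.mean(frame.astype(np.float64) ** 2))
    return 20.0 * np.log10(max(rms, 1e-9) / 32768.0) + CALIBRATION_DB

def delete_loud_noise(samples: np.ndarray) -> np.ndarray:
    """Keep only the frames at or below the first decibel threshold, mirroring step S12."""
    kept = [samples[i:i + FRAME_LEN]
            for i in range(0, len(samples) - FRAME_LEN + 1, FRAME_LEN)
            if frame_level_db(samples[i:i + FRAME_LEN]) <= FIRST_DB_THRESHOLD]
    return np.concatenate(kept) if kept else samples[:0]
```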
In one embodiment, the terminal device 1 filters the environmental noise in the speech using a deep-learning voiceprint noise reduction method. In a specific implementation, this method includes: building machine learning and deep learning models; building a voiceprint recognition model; feeding the acquired speech through the machine learning and deep learning models so that the environmental noise in the speech is recognized and distinguished; filtering the speech recognized by those models to remove the environmental noise that is not human speech audio, obtaining preliminarily screened speech; determining whether the preliminarily screened speech reaches a preset threshold; and, when it does, comparing the speech that reaches the preset threshold against the voiceprint recognition model, retaining the speech frequencies and spectrogram images that match the voiceprint recognition model and discarding the speech that does not, to obtain voiceprint-denoised speech.
In this embodiment, the terminal device 1 builds a machine learning and deep learning model from a large number of recorded speaking-environment audio samples and a large number of specific-speaker audio samples. All of these samples are converted into spectrogram form and imported into the terminal device 1, and through extensive repeated training the model learns, via machine learning and deep learning, to distinguish environmental noise (ambient sound) from the speech spectrograms of a specific speaker.
In this embodiment, each person's distinctive voiceprint can be observed from a spectrogram. The voiceprint of a specific speaker is acquired and subjected to a feature extraction operation; a voiceprint spectrogram is built from the existing voiceprint of that speaker, and after features are extracted from the voiceprint spectrogram, a voiceprint recognition model belonging only to that person can be established. Modeling methods for voiceprint recognition fall into three types: text-dependent, text-independent, and text-prompted. Since the input speech content cannot be determined in advance, the text-independent type is chosen for voiceprint modeling, yielding the voiceprint recognition model. Text-independent approaches include GMM-UBM, GMM-SVM, GMM-UBM-LF, and i-vector/PLDA. In this embodiment, GMM-UBM is selected to build the voiceprint model of a speaker verification system: when the voices of multiple speakers and test speech are input, MFCC feature vectors are extracted, and after repeated training on a large amount of human voiceprint data, MAP adaptation, and verification decisions, a voiceprint recognition model with a high recognition rate is obtained. In this embodiment, the MFCC feature extraction process includes: inputting sample speech; pre-emphasizing, framing, and windowing the sample speech; applying the Fourier transform to the processed samples; performing Mel-frequency filtering; taking the logarithm of the energies; computing the cepstrum of the samples; and outputting the MFCC features.
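As a non-limiting sketch, the MFCC extraction steps listed above (pre-emphasis, framing, windowing, Fourier transform, Mel filtering, log energy, cepstrum) can be written out in NumPy/SciPy. The sampling rate, frame length, hop, filter count, and FFT size below are assumptions, not values specified by the application:

```python
import numpy as np
from scipy.fftpack import dct

def mfcc(signal, sr=16000, n_filters=26, n_coeffs=13,
         frame_len=400, hop=160, pre_emph=0.97, n_fft=512):
    """MFCC following the steps above: pre-emphasis, framing, windowing,
    FFT, Mel filtering, log energy, cepstrum (DCT). Assumes len(signal) >= frame_len."""
    sig = np.asarray(signal, dtype=np.float64)
    sig = np.append(sig[0], sig[1:] - pre_emph * sig[:-1])         # 1) pre-emphasis
    n_frames = 1 + (len(sig) - frame_len) // hop                   # 2) framing
    frames = np.stack([sig[i * hop:i * hop + frame_len] for i in range(n_frames)])
    frames *= np.hamming(frame_len)                                # 3) windowing
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft        # 4) FFT -> power spectrum
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)             # 5) triangular Mel filterbank
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv_mel(np.linspace(mel(0), mel(sr / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / sr).astype(int)
    fbank = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fbank[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    log_energy = np.log(power @ fbank.T + 1e-10)                   # 6) log energy
    return dct(log_energy, type=2, axis=1, norm="ortho")[:, :n_coeffs]  # 7) cepstrum
```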
In another embodiment, the terminal device 1 filters white noise out of the speech. White noise refers to noise whose energy is equal in every frequency band of equal bandwidth across a wide frequency range. In this embodiment, the white noise in the speech can be removed by a wavelet transform algorithm or a Kalman filter algorithm.
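As a non-limiting sketch of the wavelet-transform option, soft-threshold wavelet denoising (a standard way to suppress white noise) might look as follows, assuming the PyWavelets library; the wavelet family, decomposition level, and universal threshold are illustrative choices rather than values specified by the application:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_denoise(signal: np.ndarray, wavelet: str = "db4", level: int = 4) -> np.ndarray:
    """Soft-threshold wavelet denoising to suppress broadband (white) noise."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate the noise level from the finest detail coefficients (median absolute deviation).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2 * np.log(len(signal)))  # universal threshold
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]
```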
Step S13: extract voiceprint feature data from the filtered speech, input the voiceprint feature data into the preset speech classification model to obtain a classification result, and, according to the classification result, encode the speech corresponding to identical voiceprint feature data and store it as a separate speech file, thereby separating the speech.
In the real world, everyone has specific voiceprint characteristics, formed gradually by the vocal organs during growth; no matter how closely another person imitates someone's speech, the voiceprint characteristics remain significantly different. Therefore, in this embodiment, voiceprint features can be used to verify a speaker's identity and to distinguish speakers' voices. In practical applications, the voiceprint feature data includes, but is not limited to, Mel-frequency cepstral coefficients (MFCC), perceptual linear prediction coefficients (PLP), deep features (Deep Feature), and power-normalized cepstral coefficients (PNCC). After the speech has been noise-filtered, the terminal device 1 extracts voiceprint feature data such as MFCC, PLP, Deep Feature, or PNCC from the speech using wavelet transform techniques, inputs the voiceprint feature data into the preset speech classification model to obtain a classification result, and, according to the classification result, encodes the speech corresponding to identical voiceprint feature data and stores it as a separate speech file.
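By way of illustration only, once each speech segment's voiceprint features have been assigned a class label by the preset speech classification model, the grouping, amplification (see the enhancement step described below), and per-speaker storage might be sketched as follows. The soundfile library, the gain value, and the file naming are assumptions introduced here:

```python
import numpy as np
import soundfile as sf  # assumed audio I/O library

def separate_by_speaker(segments, labels, sr=16000, gain=1.5, prefix="speaker"):
    """Group segments that received the same voiceprint class label, apply the
    enhancement/amplification step, and store each group as its own speech file."""
    for label in sorted(set(labels)):
        speech = np.concatenate([s for s, l in zip(segments, labels) if l == label])
        speech = np.clip(speech.astype(np.float64) * gain, -1.0, 1.0)  # illustrative gain
        sf.write(f"{prefix}_{label}.wav", speech, sr)  # encoded and stored separately
```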
In this embodiment, the preset speech classification model includes at least one of the following: a support vector machine model, a stochastic model, and a neural network model. Specifically, the terminal device uses the pre-trained preset speech classification model to determine the category of the extracted voiceprint feature data. In this application, the categories of voiceprint feature data include a first voiceprint feature category, a second voiceprint feature category, and a third voiceprint feature category. In this embodiment, the training process of the preset speech classification model includes:
1) acquiring voiceprint feature data of positive samples and of negative samples, and labeling the positive-sample voiceprint feature data with voiceprint feature categories so that the positive-sample voiceprint feature data carries voiceprint feature category labels.
For example, 500 items of voiceprint feature data are selected for each of the first, second, and third voiceprint feature categories, and each item is labeled with its category: "1" may serve as the voiceprint feature label of the first voiceprint feature category, "2" as that of the second, and "3" as that of the third.
2) randomly dividing the positive-sample and negative-sample voiceprint feature data into a training set of a first preset proportion and a validation set of a second preset proportion, training the preset speech classification model on the training set, and using the validation set to verify the accuracy of the trained preset speech classification model.
The training samples of different voiceprint features are first distributed into different folders: for example, the training samples of the first voiceprint feature category into a first folder, those of the second category into a second folder, and those of the third category into a third folder. A first preset proportion (for example, 70%) of the samples is then drawn from each folder as the overall training samples to train the preset speech classification model, and the remaining second preset proportion (for example, 30%) is drawn from each folder as the overall test samples to verify the accuracy of the trained preset speech classification model.
3) if the accuracy is greater than or equal to a preset accuracy, ending the training and using the trained preset speech classification model as the classifier that identifies the category of the voiceprint feature data; if the accuracy is less than the preset accuracy, increasing the number of positive and negative samples and retraining the preset speech classification model until the accuracy is greater than or equal to the preset accuracy.
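As a non-limiting sketch of the training loop in steps 1)-3) above, the following uses a support vector machine (one of the model types named above) with a 70/30 split. The target accuracy, the retraining budget, and the get_samples callback that supplies additional labeled samples each round are assumptions introduced here:

```python
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def train_voiceprint_classifier(get_samples, target_acc=0.95, max_rounds=5):
    """Train/validate loop mirroring steps 1)-3): 70/30 split, check accuracy,
    and request more labeled samples until the preset accuracy is reached.

    get_samples(round_idx) is a hypothetical callback returning (features, labels),
    with labels 1/2/3 for the three voiceprint feature categories."""
    for round_idx in range(max_rounds):
        X, y = get_samples(round_idx)  # more positive/negative samples each round
        X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.30, random_state=0)
        clf = SVC(kernel="rbf")
        clf.fit(X_tr, y_tr)
        acc = accuracy_score(y_va, clf.predict(X_va))
        if acc >= target_acc:
            return clf  # accuracy meets the preset accuracy: training ends
    return clf  # best-effort model after max_rounds
```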
In this embodiment, the terminal device 1 is further configured to perform enhancement and amplification on the speech corresponding to identical voiceprint feature data, and to encode the speech after the enhancement and amplification. That is, after separating from the speech the voices with different voiceprint features, the terminal device 1 enhances and amplifies each separated voice, encodes the speech corresponding to the same voiceprint features, and stores it as a separate speech file.
Step S14: recognize the separated speech to obtain the recognized text of the speech.
In this embodiment, the terminal device 1 converts the separated speech into text through speech recognition as initial speech recognition text, and matches the initial speech recognition text against a preset text database to obtain matched speech recognition text.
In this embodiment, the specific process by which the terminal device 1 converts the separated speech into text through speech recognition includes:
1) extracting the audio features of the speech and converting them into acoustic feature vectors of a preset length;
2) decoding the feature vectors into a word order according to a decoding algorithm;
3) obtaining the sub-words corresponding to the word order through HMM phoneme models, the sub-words being initials and finals;
4) splicing multiple sub-words into characters according to a preset pronunciation dictionary; and
5) decoding with the grammar rules of a language model to obtain the optimal sequence and thus the text.
In this embodiment, the grammar-rule decoding uses the Viterbi algorithm. For example, if the speech to be recognized is "你好" ("hello"), it is converted after feature extraction into 39-dimensional acoustic feature vectors, and the corresponding sub-words /n/ /i/ /h/ /ao/ are obtained through multiple HMM phoneme models. The sub-words are spliced into candidate characters according to the preset pronunciation dictionary (for example, 你 or 尼 for "ni", 好 or 号 for "hao"), and the optimal sequence "你好" is obtained by Viterbi decoding and output as text.
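For illustration, the Viterbi decoding named above can be sketched generically over an HMM in log space. The state inventory (for example, initial/final sub-word states) and the probability tables would come from the acoustic and language models, which the application does not detail:

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit):
    """Plain Viterbi decoding over an HMM: returns the most likely state sequence.
    log_init: (S,) initial log-probabilities; log_trans: (S, S) transition
    log-probabilities; log_emit: (T, S) per-frame emission log-probabilities."""
    T, S = log_emit.shape
    score = log_init + log_emit[0]          # best log-prob of paths ending in each state
    back = np.zeros((T, S), dtype=int)      # backpointers to the best predecessor
    for t in range(1, T):
        cand = score[:, None] + log_trans   # cand[i, j]: extend best path in i with i -> j
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(S)] + log_emit[t]
    path = [int(np.argmax(score))]          # backtrace from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```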
In this embodiment, at least two text databases can be preset, for example a first text database and a second text database. The first text database can be dedicated to storing modal particles such as "嗯" ("um"), "啊" ("ah"), and "是吧" ("right?"); modal particles are irrelevant to the content of a meeting and easily degrade the readability of speech converted into text. The second text database can be dedicated to storing specialized terms and their corresponding pinyin, such as "特征向量" (feature vector), "特征矩阵" (feature matrix), and "张量分析" (tensor analysis); specialized terms are relatively complex, so errors easily occur in batches when recognizing speech. A third text database and so on may also be preset according to the actual situation, dedicated to storing expressions such as personal names or place names. The number of preset text databases and their contents are not specifically limited herein.
In this embodiment, the terminal device 1 matching the initial speech recognition text against the preset text database specifically includes:
1) matching the initial speech recognition text against the preset first text database to obtain a first matching result; and
2) matching the first matching result against the preset second text database to obtain a second matching result.
Specifically, matching the initial speech recognition text against the preset first text database includes: determining whether there is a first word in the initial speech recognition text that matches a word in the preset first text database; and, when such a first word exists, processing the matched first word in the initial speech recognition text.
Preferably, processing the matched first word in the initial speech recognition text may further include: judging, according to a pre-trained modal-particle model based on a deep learning network, whether the matched first word is a modal particle to be deleted; when it is, removing the matched first word from the initial speech recognition text; and when it is not, retaining the matched first word in the initial speech recognition text.
For example, suppose the initial speech recognition text is "这个挺好用的" ("this is quite easy to use") and the preset first text database stores the modal particle "这个" ("this"). Matching the initial speech recognition text against the preset first text database identifies the matched word "这个". The pre-trained deep-learning modal-particle model then judges whether the matched first word "这个" is a modal particle to be deleted; it determines that in "这个挺好用的" the word "这个" is not a modal particle to be deleted, so the matched first word is retained in the initial speech recognition text, and the first matching result is "这个挺好用的".
As another example, suppose the initial speech recognition text is "这个，我们要开会了" ("well, we are about to have a meeting") and the preset first text database stores the modal particle "这个". After matching, the matched word "这个" is identified, and the deep-learning modal-particle model determines that in "这个，我们要开会了" it is a modal particle to be deleted, so the matched first word is removed from the initial speech recognition text and the first matching result is "我们要开会了" ("we are about to have a meeting").
Specifically, matching the first matching result against the preset second text database includes:
1) converting the words in the first matching result into first pinyin;
2) determining whether a second pinyin identical to the first pinyin exists in the preset second text database; and
3) when a second pinyin identical to the first pinyin exists in the preset second text database, extracting the word corresponding to the second pinyin as the word corresponding to the first pinyin.
For example, suppose the first matching result is "这是一个原始巨震" (a misrecognition reading "this is an original giant quake"), whose words convert to the first pinyin "zhe shi yi ge yuan shi ju zhen". The preset second text database stores the specialized term "矩阵" (matrix) and its corresponding second pinyin "juzhen". When it is determined that a second pinyin identical to the first pinyin exists in the preset second text database, the word "矩阵" corresponding to the second pinyin is extracted as the word corresponding to the first pinyin, and the second matching result is "这是一个原始矩阵" ("this is an original matrix").
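By way of illustration only, the pinyin-based correction against the second text database might be sketched as follows, assuming the pypinyin library for character-to-pinyin conversion; the database contents and the greedy span-matching strategy are assumptions introduced here:

```python
from pypinyin import lazy_pinyin  # assumed pinyin-conversion library

# Hypothetical second text database: pinyin -> specialized term
SECOND_DB = {"juzhen": "矩阵", "tezhengxiangliang": "特征向量"}

def correct_by_pinyin(text: str, max_span: int = 4) -> str:
    """Replace character spans whose pinyin matches a term in the second database."""
    chars, out, i = list(text), [], 0
    while i < len(chars):
        matched = False
        for n in range(max_span, 0, -1):  # try longer spans first
            span = "".join(chars[i:i + n])
            py = "".join(lazy_pinyin(span))
            if py in SECOND_DB:
                out.append(SECOND_DB[py])
                i += len(span)
                matched = True
                break
        if not matched:                   # no span matched: keep the character
            out.append(chars[i])
            i += 1
    return "".join(out)

print(correct_by_pinyin("这是一个原始巨震"))  # -> 这是一个原始矩阵
```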
By converting the separated speech into text through speech recognition technology as initial speech recognition text, and matching the initial speech recognition text against preset text databases to obtain matched speech recognition text, the present application can identify the speech text of utterances spoken by different people in the speech, which makes it convenient for record keepers to compile information.
Embodiment 2
Fig. 3 is a schematic diagram of an audio separation apparatus 40 in an embodiment of the application.
In some embodiments, the audio separation apparatus 40 runs in an electronic device. The audio separation apparatus 40 may include multiple functional modules composed of program code segments. The program code of each segment of the audio separation apparatus 40 can be stored in a memory and executed by at least one processor to perform the audio separation function.
In this embodiment, the audio separation apparatus 40 can be divided into multiple functional modules according to the functions it performs. Referring to Fig. 3, the audio separation apparatus 40 may include an acquisition module 401, a noise filtering module 402, a speech separation module 403, and a text recognition module 404. A module referred to in this application is a series of computer program segments that can be executed by at least one processor to complete a fixed function and that is stored in a memory. The functions of the modules are detailed in the following description.
The acquisition module 401 is configured to acquire speech.
The acquisition module 401 acquires speech through the speech acquisition unit 11. In this embodiment, the speech acquisition unit 11 may include, but is not limited to, a dynamic, condenser, piezoelectric, electromagnetic, or semiconductor microphone. In another embodiment, the acquisition module 401 may receive speech sent by an external device 2 communicatively connected to the terminal device 1. In other embodiments, the acquisition module 401 obtains speech from a storage device of the terminal device 1.
The noise filtering module 402 is configured to perform noise filtering on the speech.
In one embodiment, the noise filtering module 402 filters environmental noise out of the speech. In a specific embodiment, the noise filtering module 402 detects whether the decibel level of the acquired speech is within the preset decibel threshold range; when it is not, the noise filtering module 402 performs noise filtering on the speech. The preset decibel threshold range can be set as required; in this embodiment, it may be set to 70-80 dB. The noise filtering module 402 selects, from the speech, the speech information whose decibel level exceeds the first decibel threshold as the environmental noise and deletes it, thereby filtering noise from the speech. In this embodiment, the first decibel threshold can be set as required, for example, to 80 dB.
In one embodiment, the noise filtering module 402 filters the environmental noise in the speech using the deep-learning voiceprint noise reduction method. In a specific implementation, this method includes: building machine learning and deep learning models; building a voiceprint recognition model; feeding the acquired speech through the machine learning and deep learning models so that the environmental noise in the speech is recognized and distinguished; filtering the speech recognized by those models to remove the environmental noise that is not human speech audio, obtaining preliminarily screened speech; determining whether the preliminarily screened speech reaches a preset threshold; and, when it does, comparing the speech that reaches the preset threshold against the voiceprint recognition model, retaining the speech frequencies and spectrogram images that match the voiceprint recognition model and discarding the speech that does not, to obtain voiceprint-denoised speech.
In this embodiment, the noise filtering module 402 builds a machine learning and deep learning model from a large number of recorded speaking-environment audio samples and a large number of specific-speaker audio samples. All of these samples are converted into spectrogram form and imported into the terminal device 1, and through extensive repeated training the model learns, via machine learning and deep learning, to distinguish environmental noise (ambient sound) from the speech spectrograms of a specific speaker.
In this embodiment, each person's distinctive voiceprint can be observed from a spectrogram. The voiceprint of a specific speaker is acquired and subjected to a feature extraction operation; a voiceprint spectrogram is built from the existing voiceprint of that speaker, and after features are extracted from the voiceprint spectrogram, a voiceprint recognition model belonging only to that person can be established. Modeling methods for voiceprint recognition fall into three types: text-dependent, text-independent, and text-prompted. Since the input speech content cannot be determined in advance, the text-independent type is chosen for voiceprint modeling, yielding the voiceprint recognition model. Text-independent approaches include GMM-UBM, GMM-SVM, GMM-UBM-LF, and i-vector/PLDA. In this embodiment, GMM-UBM is selected to build the voiceprint model of a speaker verification system: when the voices of multiple speakers and test speech are input, MFCC feature vectors are extracted, and after repeated training on a large amount of human voiceprint data, MAP adaptation, and verification decisions, a voiceprint recognition model with a high recognition rate is obtained. In this embodiment, the MFCC feature extraction process includes: inputting sample speech; pre-emphasizing, framing, and windowing the sample speech; applying the Fourier transform to the processed samples; performing Mel-frequency filtering; taking the logarithm of the energies; computing the cepstrum of the samples; and outputting the MFCC features.
In another embodiment, the noise filtering module 402 filters white noise out of the speech. White noise refers to noise whose energy is equal in every frequency band of equal bandwidth across a wide frequency range. In this embodiment, the white noise in the speech can be removed by a wavelet transform algorithm or a Kalman filter algorithm.
The speech separation module 403 is configured to use the preset speech classification model to separate the filtered speech according to the voiceprint features of the speech.
In this embodiment, the speech separation module 403 using the preset speech classification model to separate the filtered speech according to its voiceprint features includes: extracting voiceprint feature data from the filtered speech, inputting the voiceprint feature data into the preset speech classification model to obtain a classification result, and, according to the classification result, encoding the speech corresponding to identical voiceprint feature data and storing it as a separate speech file, thereby separating the speech.
In the real world, everyone has specific voiceprint characteristics, formed gradually by the vocal organs during growth; no matter how closely another person imitates someone's speech, the voiceprint characteristics remain significantly different. Therefore, in this embodiment, voiceprint features can be used to verify a speaker's identity and to distinguish speakers' voices. In practical applications, the voiceprint feature data includes, but is not limited to, Mel-frequency cepstral coefficients (MFCC), perceptual linear prediction coefficients (PLP), deep features (Deep Feature), and power-normalized cepstral coefficients (PNCC). After the speech has been noise-filtered, the speech separation module 403 extracts voiceprint feature data such as MFCC, PLP, Deep Feature, or PNCC from the speech using wavelet transform techniques, inputs the voiceprint feature data into the preset speech classification model to obtain a classification result, and, according to the classification result, encodes the speech corresponding to identical voiceprint feature data and stores it as a separate speech file.
In this embodiment, the preset speech classification model includes at least one of the following: a support vector machine model, a stochastic model, and a neural network model. Specifically, the terminal device uses the pre-trained preset speech classification model to determine the category of the extracted voiceprint feature data. In this application, the categories of voiceprint feature data include a first voiceprint feature category, a second voiceprint feature category, and a third voiceprint feature category. In this embodiment, the process of training the preset speech classification model that classifies the input voiceprint feature data includes:
1) acquiring voiceprint feature data of positive samples and of negative samples, and labeling the positive-sample voiceprint feature data with voiceprint feature categories so that the positive-sample voiceprint feature data carries voiceprint feature category labels.
For example, 500 items of voiceprint feature data are selected for each of the first, second, and third voiceprint feature categories, and each item is labeled with its category: "1" may serve as the voiceprint feature label of the first voiceprint feature category, "2" as that of the second, and "3" as that of the third.
2) randomly dividing the positive-sample and negative-sample voiceprint feature data into a training set of a first preset proportion and a validation set of a second preset proportion, training the preset speech classification model on the training set, and using the validation set to verify the accuracy of the trained preset speech classification model.
The training samples of different voiceprint features are first distributed into different folders: for example, the training samples of the first voiceprint feature category into a first folder, those of the second category into a second folder, and those of the third category into a third folder. A first preset proportion (for example, 70%) of the samples is then drawn from each folder as the overall training samples to train the preset speech classification model, and the remaining second preset proportion (for example, 30%) is drawn from each folder as the overall test samples to verify the accuracy of the trained preset speech classification model.
3) if the accuracy is greater than or equal to a preset accuracy, ending the training and using the trained preset speech classification model as the classifier that identifies the category of the voiceprint feature data; if the accuracy is less than the preset accuracy, increasing the number of positive and negative samples and retraining the preset speech classification model until the accuracy is greater than or equal to the preset accuracy.
In this embodiment, the speech separation module 403 is further configured to perform enhancement and amplification on the speech corresponding to identical voiceprint feature data, and to encode the speech after the enhancement and amplification. That is, after separating from the speech the voices with different voiceprint features, the terminal device 1 enhances and amplifies each separated voice, encodes the speech corresponding to the same voiceprint features, and stores it as a separate speech file.
The text recognition module 404 is configured to recognize the separated speech to obtain the recognized text of the speech.
In this embodiment, the text recognition module 404 converts the separated speech into text through speech recognition as initial speech recognition text, and matches the initial speech recognition text against a preset text database to obtain matched speech recognition text.
In this embodiment, the specific process by which the text recognition module 404 converts the separated speech into text through speech recognition includes:
1) extracting the audio features of the speech and converting them into acoustic feature vectors of a preset length;
2) decoding the feature vectors into a word order according to a decoding algorithm;
3) obtaining the sub-words corresponding to the word order through HMM phoneme models, the sub-words being initials and finals;
4) splicing multiple sub-words into characters according to a preset pronunciation dictionary; and
5) decoding with the grammar rules of a language model to obtain the optimal sequence and thus the text.
In this embodiment, the grammar-rule decoding uses the Viterbi algorithm. For example, if the speech to be recognized is "你好" ("hello"), it is converted after feature extraction into 39-dimensional acoustic feature vectors, and the corresponding sub-words /n/ /i/ /h/ /ao/ are obtained through multiple HMM phoneme models. The sub-words are spliced into candidate characters according to the preset pronunciation dictionary (for example, 你 or 尼 for "ni", 好 or 号 for "hao"), and the optimal sequence "你好" is obtained by Viterbi decoding and output as text.
In this embodiment, at least two text databases can be preset, for example a first text database and a second text database. The first text database can be dedicated to storing modal particles such as "嗯" ("um"), "啊" ("ah"), and "是吧" ("right?"); modal particles are irrelevant to the content of a meeting and easily degrade the readability of speech converted into text. The second text database can be dedicated to storing specialized terms and their corresponding pinyin, such as "特征向量" (feature vector), "特征矩阵" (feature matrix), and "张量分析" (tensor analysis); specialized terms are relatively complex, so errors easily occur in batches when recognizing speech. A third text database and so on may also be preset according to the actual situation, dedicated to storing expressions such as personal names or place names. The number of preset text databases and their contents are not specifically limited herein.
In this embodiment, the text recognition module 404 matching the initial speech recognition text against the preset text database specifically includes:
1) matching the initial speech recognition text against the preset first text database to obtain a first matching result; and
2) matching the first matching result against the preset second text database to obtain a second matching result.
Specifically, matching the initial speech recognition text against the preset first text database includes: determining whether there is a first word in the initial speech recognition text that matches a word in the preset first text database; and, when such a first word exists, processing the matched first word in the initial speech recognition text.
Preferably, processing the matched first word in the initial speech recognition text may further include: judging, according to a pre-trained modal-particle model based on a deep learning network, whether the matched first word is a modal particle to be deleted; when it is, removing the matched first word from the initial speech recognition text; and when it is not, retaining the matched first word in the initial speech recognition text.
For example, suppose the initial speech recognition text is "这个挺好用的" ("this is quite easy to use") and the preset first text database stores the modal particle "这个" ("this"). Matching the initial speech recognition text against the preset first text database identifies the matched word "这个". The pre-trained deep-learning modal-particle model then judges whether the matched first word "这个" is a modal particle to be deleted; it determines that in "这个挺好用的" the word "这个" is not a modal particle to be deleted, so the matched first word is retained in the initial speech recognition text, and the first matching result is "这个挺好用的".
As another example, suppose the initial speech recognition text is "这个，我们要开会了" ("well, we are about to have a meeting") and the preset first text database stores the modal particle "这个". After matching, the matched word "这个" is identified, and the deep-learning modal-particle model determines that in "这个，我们要开会了" it is a modal particle to be deleted, so the matched first word is removed from the initial speech recognition text and the first matching result is "我们要开会了" ("we are about to have a meeting").
Specifically, matching the first matching result against the preset second text database includes:
1) converting the words in the first matching result into first pinyin;
2) determining whether a second pinyin identical to the first pinyin exists in the preset second text database; and
3) when a second pinyin identical to the first pinyin exists in the preset second text database, extracting the word corresponding to the second pinyin as the word corresponding to the first pinyin.
For example, suppose the first matching result is "这是一个原始巨震" (a misrecognition reading "this is an original giant quake"), whose words convert to the first pinyin "zhe shi yi ge yuan shi ju zhen". The preset second text database stores the specialized term "矩阵" (matrix) and its corresponding second pinyin "juzhen". When it is determined that a second pinyin identical to the first pinyin exists in the preset second text database, the word "矩阵" corresponding to the second pinyin is extracted as the word corresponding to the first pinyin, and the second matching result is "这是一个原始矩阵" ("this is an original matrix").
By converting the separated speech into text through speech recognition technology as initial speech recognition text, and matching the initial speech recognition text against preset text databases to obtain matched speech recognition text, the present application can identify the speech text of utterances spoken by different people in the speech, which makes it convenient for record keepers to compile information.
实施例3Example 3
图4为本申请电子设备7较佳实施例的示意图。FIG. 4 is a schematic diagram of a preferred embodiment of the electronic device 7 of this application.
所述电子设备7包括存储器71、处理器72以及存储在所述存储器71中并可在所述处理器72上运行的计算机程序73。所述处理器72执行所述计算机程序73时实现上述音频分离方法实施例中的步骤,例如图1所示的步骤S11~S14。即所述音频分离方法包括:获取语音;对所述语音进行噪声过滤;从过滤后的语音中提取声纹特征数据,将所述声纹特征数据输入到预设语音分类模型进行分类得到分类结果,根据所述分类结果将相同的声纹特征数据对应的语音进行编码并存储为单独的语音文件而将所述语音进行分离处理;及对经过分离处理后的语音进行识别以获取所述语音的识别文本。或者,所述处理器72执行所述计算机程序73时实现上述音频分离装置实施例中各模块/单元的功能,例如图3中的模块401~404。The electronic device 7 includes a memory 71, a processor 72, and a computer program 73 that is stored in the memory 71 and can run on the processor 72. When the processor 72 executes the computer program 73, the steps in the above audio separation method embodiment are implemented, such as steps S11 to S14 shown in FIG. 1. That is, the audio separation method includes: acquiring speech; performing noise filtering on the speech; extracting voiceprint feature data from the filtered voice, and inputting the voiceprint feature data into a preset voice classification model for classification to obtain a classification result According to the classification result, the voice corresponding to the same voiceprint feature data is encoded and stored as a separate voice file to separate the voice; and the voice after the separation is recognized to obtain the information of the voice Recognize the text. Alternatively, when the processor 72 executes the computer program 73, the functions of the modules/units in the foregoing audio separation device embodiment are implemented, for example, the modules 401 to 404 in FIG. 3.
示例性的,所述计算机程序73可以被分割成一个或多个模块/单元,所述一个或者多个模块/单元被存储在所述存储器71中,并由所述处理器72执行,以完成本申请。所述一个或多个模块/单元可以是能够完成特定功能的一系列计算机程序指令段,所述指令段用于描述所述计算机程序73在所述电 子设备7中的执行过程。例如,所述计算机程序73可以被分割成图3中的获取模块401、噪声过滤模块402、语音分离模块403及文本识别模块404,各模块具体功能参见实施例二。Exemplarily, the computer program 73 may be divided into one or more modules/units, and the one or more modules/units are stored in the memory 71 and executed by the processor 72 to complete This application. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments are used to describe the execution process of the computer program 73 in the electronic device 7. For example, the computer program 73 can be divided into an acquisition module 401, a noise filtering module 402, a speech separation module 403, and a text recognition module 404 in FIG. 3. For specific functions of each module, refer to the second embodiment.
本实施方式中,所述电子设备7与终端装置1为同一装置。在其他实施方式中,所述电子设备7可以是桌上型计算机、笔记本、掌上电脑及云端服务器等计算设备。本领域技术人员可以理解,所述示意图仅仅是电子设备7的示例,并不构成对电子设备7的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件,例如所述电子设备7还可以包括输入输出设备、网络接入设备、总线等。In this embodiment, the electronic device 7 and the terminal device 1 are the same device. In other embodiments, the electronic device 7 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server. Those skilled in the art can understand that the schematic diagram is only an example of the electronic device 7 and does not constitute a limitation on the electronic device 7. It may include more or less components than those shown in the figure, or a combination of certain components, or different components. Components, for example, the electronic device 7 may also include input and output devices, network access devices, buses, and the like.
所称处理器72可以是中央处理模块(Central Processing Unit,CPU),还可以是其他通用处理器、数字信号处理器(Digital Signal Processor,DSP)、专用集成电路(Application Specific Integrated Circuit,ASIC)、现成可编程门阵列(Field-Programmable Gate Array,FPGA)或者其他可编程逻辑器件、分立门或者晶体管逻辑器件、分立硬件组件等。通用处理器可以是微处理器或者所述处理器72也可以是任何常规的处理器等,所述处理器72是所述电子设备7的控制中心,利用各种接口和线路连接整个电子设备7的各个部分。The so-called processor 72 may be a central processing module (Central Processing Unit, CPU), other general-purpose processors, digital signal processors (Digital Signal Processor, DSP), application specific integrated circuits (Application Specific Integrated Circuit, ASIC), Ready-made programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic devices, discrete gate or transistor logic devices, discrete hardware components, etc. The general-purpose processor can be a microprocessor or the processor 72 can also be any conventional processor, etc. The processor 72 is the control center of the electronic device 7 and connects the entire electronic device 7 with various interfaces and lines. Parts.
The memory 71 may be configured to store the computer program 73 and/or the modules/units. The processor 72 implements the various functions of the electronic device 7 by running or executing the computer programs and/or modules/units stored in the memory 71 and by invoking the data stored in the memory 71. The memory 71 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and the application programs required by at least one function (such as a sound playback function or an image playback function), and the data storage area may store data created according to the use of the electronic device 7 (such as audio data or a phone book). In addition, the memory 71 may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
If the integrated modules/units of the electronic device 7 are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium, which may be non-volatile or volatile. Based on this understanding, all or part of the processes in the method of the above embodiment of this application may also be implemented by instructing the relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, implements the above audio separation method, namely: acquiring speech; performing noise filtering on the speech; extracting voiceprint feature data from the filtered speech, inputting the voiceprint feature data into a preset speech classification model for classification to obtain a classification result, and, according to the classification result, encoding the speech corresponding to the same voiceprint feature data and storing it as a separate speech file, thereby separating the speech; and recognizing the separated speech to obtain the recognized text of the speech. The computer program includes computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or apparatus capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately added or removed in accordance with the requirements of legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunications signals.
In the several embodiments provided in this application, it should be understood that the disclosed electronic device and method may be implemented in other ways. For example, the electronic device embodiments described above are merely illustrative; the division of the modules is only a division by logical function, and there may be other division methods in actual implementation.
In addition, the functional modules in the various embodiments of this application may be integrated in the same processing module, each module may exist alone physically, or two or more modules may be integrated in the same module. The above integrated modules may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of this application and not to limit them. Although this application has been described in detail with reference to the preferred embodiments, those of ordinary skill in the art should understand that the technical solutions of this application may be modified or equivalently replaced without departing from the spirit and scope of the technical solutions of this application.

Claims (20)

  1. An audio separation method, wherein the method comprises:
    acquiring speech;
    performing noise filtering on the speech;
    extracting voiceprint feature data from the filtered speech, inputting the voiceprint feature data into a preset speech classification model for classification to obtain a classification result, and, according to the classification result, encoding the speech corresponding to the same voiceprint feature data and storing it as a separate speech file, thereby separating the speech; and
    recognizing the separated speech to obtain the recognized text of the speech.
  2. The audio separation method according to claim 1, wherein the training process of the preset speech classification model comprises:
    acquiring voiceprint feature data of positive samples and voiceprint feature data of negative samples, and labeling the voiceprint feature data of the positive samples with voiceprint feature categories, so that the voiceprint feature data of the positive samples carry voiceprint feature category labels;
    randomly dividing the voiceprint feature data of the positive samples and the voiceprint feature data of the negative samples into a training set of a first preset proportion and a verification set of a second preset proportion, training the preset speech classification model with the training set, and verifying the accuracy of the trained preset speech classification model with the verification set;
    if the accuracy is greater than or equal to a preset accuracy, ending the training and using the trained preset speech classification model as a classifier to identify the category of the voiceprint feature data; and
    if the accuracy is less than the preset accuracy, increasing the number of positive samples and the number of negative samples to retrain the preset speech classification model until the accuracy is greater than or equal to the preset accuracy.
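A minimal sketch of this train-and-verify loop follows, assuming an SVM as the preset speech classification model and a hypothetical `get_samples` helper that returns labeled positive/negative voiceprint features; the threshold and sample counts are illustrative assumptions.

```python
# Hedged sketch of the claimed training loop: split into training/verification
# sets, train, check accuracy, and enlarge the sample sets until the preset
# accuracy is reached. The SVM and get_samples() are stand-ins.
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_until_accurate(get_samples, preset_accuracy=0.95, val_ratio=0.2):
    n = 1000                                   # initial sample count (assumed)
    while True:
        X, y = get_samples(n)                  # voiceprint features + category labels
        X_tr, X_val, y_tr, y_val = train_test_split(
            X, y, test_size=val_ratio)         # first/second preset proportions
        model = SVC().fit(X_tr, y_tr)
        acc = model.score(X_val, y_val)        # accuracy on the verification set
        if acc >= preset_accuracy:             # end training once the threshold is met
            return model
        n *= 2                                 # otherwise add samples and retrain
```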
  3. The audio separation method according to claim 1, wherein performing noise filtering on the speech comprises:
    selecting, from the speech, the speech information whose decibel level exceeds a first decibel threshold as environmental noise, and deleting the environmental noise whose decibel level exceeds the first decibel threshold.
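A minimal sketch of such a decibel gate, assuming a mono float signal in [-1, 1] and frame-level RMS levels; the frame size and threshold value are illustrative assumptions.

```python
# Illustrative decibel gate: drop frames whose level exceeds the first threshold.
import numpy as np

def filter_loud_noise(audio, first_db_threshold=-10.0, frame_len=512):
    frames = [audio[i:i + frame_len]
              for i in range(0, len(audio) - frame_len + 1, frame_len)]
    kept = []
    for f in frames:
        rms = np.sqrt(np.mean(f ** 2)) + 1e-12
        db = 20 * np.log10(rms)        # level in dBFS for a [-1, 1] float signal
        if db <= first_db_threshold:   # frames above the threshold count as noise
            kept.append(f)
    return np.concatenate(kept) if kept else np.zeros(0)
```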
  4. The audio separation method according to claim 1, wherein performing noise filtering on the speech comprises:
    establishing a machine learning and deep learning model; establishing a voiceprint recognition model; learning the acquired speech through the machine learning and deep learning model, and identifying and distinguishing the environmental noise in the speech; filtering the speech identified by the machine learning and deep learning model, and removing the environmental noise in the speech that is not human speech audio, to obtain preliminarily screened speech; determining whether the preliminarily screened speech reaches a preset threshold; and, when it is determined that the preliminarily screened speech reaches the preset threshold, comparing the speech that reaches the preset threshold against the voiceprint recognition model for extraction, retaining the speech frequencies and spectrogram images that match the voiceprint recognition model, and removing the speech that does not match the voiceprint recognition model, to obtain speech processed by voiceprint noise reduction.
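The two-stage filtering can be pictured with the hedged sketch below, where `speech_prob` (a hypothetical speech/noise classifier) performs the preliminary screening and a cosine similarity against an enrolled voiceprint embedding stands in for the voiceprint recognition model; none of this is the application's concrete model.

```python
# Hedged two-stage sketch: (1) keep frames a speech/noise classifier scores as
# speech above the preset threshold; (2) keep only frames whose embedding is
# close enough to the enrolled voiceprint. Both callables are assumptions.
import numpy as np

def voiceprint_denoise(frames, speech_prob, embed, enrolled_print,
                       preset_threshold=0.5, similarity_floor=0.7):
    screened = [f for f in frames if speech_prob(f) >= preset_threshold]
    kept = []
    for f in screened:
        e = embed(f)
        cos = np.dot(e, enrolled_print) / (
            np.linalg.norm(e) * np.linalg.norm(enrolled_print) + 1e-12)
        if cos >= similarity_floor:    # matches the voiceprint model: retain
            kept.append(f)
    return kept
```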
  5. The audio separation method according to claim 1, wherein recognizing the separated speech to obtain the recognized text of the speech comprises:
    converting the separated speech into text through speech recognition, as an initial speech recognition text; and
    matching the initial speech recognition text against a preset text database to obtain the matched speech recognition text.
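One plausible reading of the matching step, sketched with `difflib` string similarity; the database contents and cutoff are invented for illustration.

```python
# Illustrative match of recognized text against a preset text database.
import difflib

PRESET_TEXTS = ["please verify my identity", "I want to check my balance"]

def match_against_database(initial_text, cutoff=0.6):
    hits = difflib.get_close_matches(initial_text, PRESET_TEXTS, n=1, cutoff=cutoff)
    return hits[0] if hits else initial_text  # fall back to the raw recognition

print(match_against_database("please verify my identy"))
```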
  6. The audio separation method according to claim 5, wherein converting the separated speech into text through speech recognition comprises:
    extracting audio features of the speech and converting them into acoustic feature vectors of a preset length;
    decoding the feature vectors into a word sequence according to a decoding algorithm;
    obtaining sub-words corresponding to the word sequence through an HMM phoneme model, the sub-words being initials and finals;
    splicing multiple sub-words into text according to a preset pronunciation dictionary; and
    decoding with the Viterbi algorithm to obtain the optimal sequence, thereby obtaining the text.
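To make the final step concrete, a toy Viterbi decoder over a discrete HMM is sketched below; the two-state model and its probabilities are invented for illustration and are unrelated to any real phoneme model.

```python
# Toy Viterbi decoder: recover the most probable hidden-state sequence.
import numpy as np

def viterbi(obs, start_p, trans_p, emit_p):
    n_states, T = trans_p.shape[0], len(obs)
    logp = np.full((T, n_states), -np.inf)      # best log-probability per state
    back = np.zeros((T, n_states), dtype=int)   # backpointers for the best path
    logp[0] = np.log(start_p) + np.log(emit_p[:, obs[0]])
    for t in range(1, T):
        for s in range(n_states):
            scores = logp[t - 1] + np.log(trans_p[:, s])
            back[t, s] = int(np.argmax(scores))
            logp[t, s] = scores[back[t, s]] + np.log(emit_p[s, obs[t]])
    path = [int(np.argmax(logp[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]                           # optimal state sequence

start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3], [0.4, 0.6]])
emit = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(viterbi([0, 1, 2], start, trans, emit))
```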
  7. The audio separation method according to claim 1, wherein
    the voiceprint features include Mel-frequency cepstral coefficients (MFCC), perceptual linear prediction coefficients (PLP), deep features (Deep Feature), and power-normalized cepstral coefficients (PNCC).
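Of the listed features, MFCC extraction is readily sketched with `librosa` (an assumed dependency); PLP, deep features, and PNCC would require other tooling and are omitted here.

```python
# Illustrative MFCC extraction for voiceprint features; the input file is assumed.
import librosa

y, sr = librosa.load("speaker.wav", sr=16000)       # load and resample to 16 kHz
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)  # (13, n_frames) coefficients
print(mfcc.shape)
```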
  8. An audio separation apparatus, wherein the apparatus comprises:
    an acquisition module configured to acquire speech;
    a noise filtering module configured to perform noise filtering on the speech;
    a speech separation module configured to extract voiceprint feature data from the filtered speech, input the voiceprint feature data into a preset speech classification model for classification to obtain a classification result, and, according to the classification result, encode the speech corresponding to the same voiceprint feature data and store it as a separate speech file, thereby separating the speech; and
    a text recognition module configured to recognize the separated speech to obtain the recognized text of the speech.
  9. An electronic device, wherein the electronic device comprises a processor, and the processor is configured to implement an audio separation method when executing a computer program stored in a memory, the audio separation method comprising:
    acquiring speech;
    performing noise filtering on the speech;
    extracting voiceprint feature data from the filtered speech, inputting the voiceprint feature data into a preset speech classification model for classification to obtain a classification result, and, according to the classification result, encoding the speech corresponding to the same voiceprint feature data and storing it as a separate speech file, thereby separating the speech; and
    recognizing the separated speech to obtain the recognized text of the speech.
  10. The electronic device according to claim 9, wherein the training process of the preset speech classification model comprises:
    acquiring voiceprint feature data of positive samples and voiceprint feature data of negative samples, and labeling the voiceprint feature data of the positive samples with voiceprint feature categories, so that the voiceprint feature data of the positive samples carry voiceprint feature category labels;
    randomly dividing the voiceprint feature data of the positive samples and the voiceprint feature data of the negative samples into a training set of a first preset proportion and a verification set of a second preset proportion, training the preset speech classification model with the training set, and verifying the accuracy of the trained preset speech classification model with the verification set;
    if the accuracy is greater than or equal to a preset accuracy, ending the training and using the trained preset speech classification model as a classifier to identify the category of the voiceprint feature data; and
    if the accuracy is less than the preset accuracy, increasing the number of positive samples and the number of negative samples to retrain the preset speech classification model until the accuracy is greater than or equal to the preset accuracy.
  11. The electronic device according to claim 9, wherein performing noise filtering on the speech comprises:
    selecting, from the speech, the speech information whose decibel level exceeds a first decibel threshold as environmental noise, and deleting the environmental noise whose decibel level exceeds the first decibel threshold.
  12. The electronic device according to claim 9, wherein performing noise filtering on the speech comprises:
    establishing a machine learning and deep learning model; establishing a voiceprint recognition model; learning the acquired speech through the machine learning and deep learning model, and identifying and distinguishing the environmental noise in the speech; filtering the speech identified by the machine learning and deep learning model, and removing the environmental noise in the speech that is not human speech audio, to obtain preliminarily screened speech; determining whether the preliminarily screened speech reaches a preset threshold; and, when it is determined that the preliminarily screened speech reaches the preset threshold, comparing the speech that reaches the preset threshold against the voiceprint recognition model for extraction, retaining the speech frequencies and spectrogram images that match the voiceprint recognition model, and removing the speech that does not match the voiceprint recognition model, to obtain speech processed by voiceprint noise reduction.
  13. The electronic device according to claim 9, wherein recognizing the separated speech to obtain the recognized text of the speech comprises:
    converting the separated speech into text through speech recognition, as an initial speech recognition text; and
    matching the initial speech recognition text against a preset text database to obtain the matched speech recognition text.
  14. The electronic device according to claim 13, wherein converting the separated speech into text through speech recognition comprises:
    extracting audio features of the speech and converting them into acoustic feature vectors of a preset length;
    decoding the feature vectors into a word sequence according to a decoding algorithm;
    obtaining sub-words corresponding to the word sequence through an HMM phoneme model, the sub-words being initials and finals;
    splicing multiple sub-words into text according to a preset pronunciation dictionary; and
    decoding with the Viterbi algorithm to obtain the optimal sequence, thereby obtaining the text.
  15. A computer-readable storage medium on which a computer program is stored, wherein, when the computer program is executed by a processor, the audio separation method is implemented, the audio separation method comprising:
    acquiring speech;
    performing noise filtering on the speech;
    extracting voiceprint feature data from the filtered speech, inputting the voiceprint feature data into a preset speech classification model for classification to obtain a classification result, and, according to the classification result, encoding the speech corresponding to the same voiceprint feature data and storing it as a separate speech file, thereby separating the speech; and
    recognizing the separated speech to obtain the recognized text of the speech.
  16. The computer-readable storage medium according to claim 15, wherein the training process of the preset speech classification model comprises:
    acquiring voiceprint feature data of positive samples and voiceprint feature data of negative samples, and labeling the voiceprint feature data of the positive samples with voiceprint feature categories, so that the voiceprint feature data of the positive samples carry voiceprint feature category labels;
    randomly dividing the voiceprint feature data of the positive samples and the voiceprint feature data of the negative samples into a training set of a first preset proportion and a verification set of a second preset proportion, training the preset speech classification model with the training set, and verifying the accuracy of the trained preset speech classification model with the verification set;
    if the accuracy is greater than or equal to a preset accuracy, ending the training and using the trained preset speech classification model as a classifier to identify the category of the voiceprint feature data; and
    if the accuracy is less than the preset accuracy, increasing the number of positive samples and the number of negative samples to retrain the preset speech classification model until the accuracy is greater than or equal to the preset accuracy.
  17. The computer-readable storage medium according to claim 15, wherein performing noise filtering on the speech comprises:
    selecting, from the speech, the speech information whose decibel level exceeds a first decibel threshold as environmental noise, and deleting the environmental noise whose decibel level exceeds the first decibel threshold.
  18. The computer-readable storage medium according to claim 15, wherein performing noise filtering on the speech comprises:
    establishing a machine learning and deep learning model; establishing a voiceprint recognition model; learning the acquired speech through the machine learning and deep learning model, and identifying and distinguishing the environmental noise in the speech; filtering the speech identified by the machine learning and deep learning model, and removing the environmental noise in the speech that is not human speech audio, to obtain preliminarily screened speech; determining whether the preliminarily screened speech reaches a preset threshold; and, when it is determined that the preliminarily screened speech reaches the preset threshold, comparing the speech that reaches the preset threshold against the voiceprint recognition model for extraction, retaining the speech frequencies and spectrogram images that match the voiceprint recognition model, and removing the speech that does not match the voiceprint recognition model, to obtain speech processed by voiceprint noise reduction.
  19. The computer-readable storage medium according to claim 15, wherein recognizing the separated speech to obtain the recognized text of the speech comprises:
    converting the separated speech into text through speech recognition, as an initial speech recognition text; and
    matching the initial speech recognition text against a preset text database to obtain the matched speech recognition text.
  20. The computer-readable storage medium according to claim 19, wherein converting the separated speech into text through speech recognition comprises:
    extracting audio features of the speech and converting them into acoustic feature vectors of a preset length;
    decoding the feature vectors into a word sequence according to a decoding algorithm;
    obtaining sub-words corresponding to the word sequence through an HMM phoneme model, the sub-words being initials and finals;
    splicing multiple sub-words into text according to a preset pronunciation dictionary; and
    decoding with the Viterbi algorithm to obtain the optimal sequence, thereby obtaining the text.
PCT/CN2020/086757 — WO2021012734A1 (en): Audio separation method and apparatus, electronic device and computer-readable storage medium (priority date 2019-07-25, filing date 2020-04-24)

Applications Claiming Priority (2)

CN201910678465.5A (published as CN110473566A) — priority date 2019-07-25, filing date 2019-07-25 — Audio separation method, device, electronic equipment and computer readable storage medium
CN201910678465.5 — priority date 2019-07-25

Publications (1)

WO2021012734A1 — published 2021-01-28

Family ID: 68508340 (family application: PCT/CN2020/086757, published as WO2021012734A1)


Also Published As

CN110473566A — published 2019-11-19


Legal Events

121 — EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 20845047; country: EP; kind code: A1)
NENP — Non-entry into the national phase (ref country code: DE)
122 — EP: PCT application non-entry in European phase (ref document number: 20845047; country: EP; kind code: A1)
32PN — EP: public notification in the EP bulletin as address of the addressee cannot be established (free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 08.08.2022))