WO2023283823A1 - Speech adversarial sample detection method, apparatus, device, and computer-readable storage medium - Google Patents
- Publication number
- WO2023283823A1 (PCT/CN2021/106236)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- speech
- sample
- adversarial
- training
- samples
- Prior art date
Classifications
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/01—Assessment or evaluation of speech recognition systems
- G10L15/06—Creation of reference templates; Training of speech recognition systems, e.g. adaptation to the characteristics of the speaker's voice
- G10L15/063—Training
Definitions
- The embodiments of the present invention relate to the technical field of artificial intelligence, and in particular to a speech adversarial sample detection method, a training method for a speech adversarial sample detection model, an apparatus, a computing device, and a computer-readable storage medium.
- The goal of a speech recognition system is to translate speech into text, that is, to perform speech-to-text translation.
- End-to-end speech recognition systems based on deep learning have gradually become popular in the market.
- Adversarial example techniques, however, expose end-to-end ASR to security risks.
- Current adversarial machine learning techniques can generate speech adversarial examples: by adding carefully crafted perturbations to the audio, ASR can be deliberately made to "mistranslate" while the perturbation remains imperceptible to the human ear.
- In the process of implementing the present invention, the inventors of the present application found that existing adversarial example techniques can change the recognition results of a recognition system almost arbitrarily, according to the tamperer's intent.
- Embodiments of the present invention provide a speech adversarial sample detection method, a training method for a speech adversarial sample detection model, an apparatus, a computing device, and a computer-readable storage medium, which solve the technical problem in the prior art that adversarial speech data is difficult to identify.
- A method for training a speech adversarial sample detection model, comprising:
- acquiring speech training samples, the speech training samples including a plurality of normal speech samples and a plurality of adversarial speech samples; the adversarial speech samples are negative samples with tampered semantics;
- performing spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms, respectively;
- inputting the positive sample spectrograms and the negative sample spectrograms respectively into a preset neural network for training to obtain a speech adversarial sample detection model.
- The acquiring of speech training samples including normal speech samples and adversarial speech samples comprises: obtaining original normal speech samples; and generating adversarial speech samples from the original normal speech samples through an objective function.
- The objective function is: min ‖δ‖₂² + ℓ(x′+δ, t)  s.t.  db(δ) < T
- δ represents the adversarial perturbation
- x′ is the original normal speech sample
- t is the target sentence
- l is the CTC loss
- the degree of distortion is represented in decibels, db(δ)
- the degree of distortion represents the relative loudness of the audio on a logarithmic scale
- T represents a threshold on the energy magnitude of the adversarial perturbation.
- Performing spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively includes: dividing each of the speech training samples into a plurality of small speech segments; applying a truncation window function to the plurality of small speech segments to obtain a plurality of windowed small speech segments; and performing a short-time Fourier transform on each of the plurality of windowed small speech segments to obtain a plurality of spectrograms corresponding to each of the speech training samples.
- The truncation window function is a Hanning window function; applying the truncation window function to the plurality of small speech segments to obtain a plurality of windowed small speech segments includes: applying the Hanning window function to the small speech segments to obtain a plurality of windowed small speech segments.
- Inputting the positive sample spectrograms and the negative sample spectrograms into the preset neural network for training to obtain the speech adversarial sample detection model includes: inputting the positive sample spectrograms and the negative sample spectrograms respectively into the preset neural network for training to obtain a prediction result; calculating an energy loss function according to the labels of the positive sample spectrograms, the labels of the negative sample spectrograms, and the prediction result; adjusting the parameters of the preset neural network according to the energy loss function, then re-inputting the positive sample spectrograms and the negative sample spectrograms into the preset neural network, recalculating the energy loss function, and adjusting the parameters again, until the energy loss function converges or reaches a preset threshold, to obtain the speech adversarial sample detection model.
- The energy loss function is:
- E_θ(Y, x) = −Y·μ·F_θ(x);
- θ is the parameter of the preset neural network; Y is the label of the speech training sample; x is the speech training sample; F_θ(x) is the network output; μ is a positive constant.
- A speech adversarial example detection method, including:
- inputting the spectrogram to be detected into a speech adversarial sample detection model, the speech adversarial sample detection model being trained according to the above training method;
- A training apparatus for a speech adversarial sample detection model, including:
- a first acquisition module, used to acquire speech training samples, the speech training samples including normal speech samples and adversarial speech samples; the adversarial speech samples are negative samples with tampered semantics;
- an extraction module, used to perform spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms, respectively;
- a training module, used to input the positive sample spectrograms and the negative sample spectrograms respectively into a preset neural network for training to obtain a speech adversarial sample detection model.
- A speech adversarial sample detection apparatus, including:
- a second acquisition module, used to acquire the speech data to be detected;
- a conversion module, configured to convert the speech data to be detected into spectrograms to be detected;
- a detection module, used to input the spectrograms to be detected into a speech adversarial sample detection model;
- the speech adversarial sample detection model is obtained by training according to the above training method for a speech adversarial sample detection model or by the above training apparatus for a speech adversarial sample detection model;
- an output module, configured to output the detection result of the speech data to be detected.
- A computing device, including: a processor, a memory, a communication interface, and a communication bus, where the processor, the memory, and the communication interface communicate with one another via the communication bus; the memory is used to store at least one executable instruction, and the executable instruction causes the processor to execute the operations of the above training method for a speech adversarial sample detection model or the operations of the above speech adversarial sample detection method.
- A computer-readable storage medium storing at least one executable instruction which, when run on a computing device, causes the computing device to execute the operations of the above training method for a speech adversarial sample detection model or the operations of the above speech adversarial sample detection method.
- In the embodiments of the present invention, spectrogram features are extracted from the speech training samples to obtain positive sample spectrograms and negative sample spectrograms, which are respectively input into a preset neural network for training to obtain a speech adversarial sample detection model. This forms an automatic speech adversarial sample detection tool and improves the efficiency of identifying fabricated speech while maintaining high judgment accuracy, providing a front-end safeguard for ASR security and preventing the adverse effects of deliberately manufactured adversarial examples that tamper with semantics.
- Figure 1 shows a schematic diagram of speech recognition errors caused by adding an adversarial noise perturbation, provided by an embodiment of the present invention
- FIG. 2 shows a schematic flowchart of a training method for a speech adversarial sample detection model provided by an embodiment of the present invention
- Fig. 3 is a schematic flowchart of a method for detecting speech adversarial samples provided by an embodiment of the present invention
- Fig. 4 shows the structural representation of the training device of the speech adversarial sample detection model that the embodiment of the present invention provides
- FIG. 5 shows a schematic structural diagram of a detection device for speech adversarial samples provided by an embodiment of the present invention
- Fig. 6 shows a schematic structural diagram of a computing device provided by an embodiment of the present invention.
- ASR Automatic Speech Recognition, automatic speech recognition system.
- Waveform signal Time-domain waveform signal of speech.
- Adversarial perturbation is noise added to clean speech to make it a speech adversarial example.
- Adversarial examples samples that deceive neural networks by adding perturbations that are imperceptible to humans.
- STFT Short-time Fourier transform
- Spectrogram A time-frequency analysis view of the speech spectrum.
- Multispectrum Multiple STFT spectrograms of a speech signal.
- CNN Convolutional Neural Network
- EBM energy-based model, a model based on energy functions.
- In Figure 1, the spectrum on the left is normal speech, whose recognition result is "I'm going out to play today"; the spectrum in the middle is an adversarial noise perturbation; the spectrum on the right is the superposition of the normal speech spectrum on the left and the adversarial noise perturbation in the middle,
- and its spectrogram is that of a speech adversarial sample with tampered semantics.
- The speech recognition result of the spectrogram on the right is "I'm staying at home today." It can be seen that the spectra of the left and right images are very similar, but the semantics are completely different.
- Methods of generating speech adversarial samples that tamper with semantics roughly divide into white-box attacks and black-box attacks.
- White-box attacks assume that the model parameters can be accessed and apply gradient-based data corrections in a targeted manner to modify the original data; in black-box attacks, the attacker cannot access the internal information of the model and usually uses heuristics to add noise, continuously adjusting the added noise to modify the original data.
- The inventors of the present application found through analysis that, compared with normal speech, the STFT spectrogram of speech whose semantics has been tampered with exhibits the following pattern in its coherence estimate and cross-spectrum phase across frequency bands: the higher the frequency band, the lower the coherence and the greater the change in cross-spectrum phase.
- Based on this characteristic difference between normal speech samples and speech adversarial samples in the STFT spectrogram, the embodiment of the present invention inputs the STFT spectrogram as a feature into a convolutional neural network and uses an energy-based model for classification, so as to accurately identify adversarial examples.
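The frequency-band pattern described above can be illustrated with a small sketch of the two quantities involved. This is not the patent's exact analysis procedure; the frame length, hop size, and Welch-style averaging below are illustrative assumptions, using only the Python standard library.

```python
import cmath
import math

def dft(seg):
    """One-sided discrete Fourier transform of a short frame."""
    N = len(seg)
    return [sum(seg[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N // 2 + 1)]

def frames(x, L, hop):
    """Split a waveform into overlapping frames of length L."""
    return [x[s:s + L] for s in range(0, len(x) - L + 1, hop)]

def coherence_and_phase(x, y, L=32, hop=16):
    """Magnitude-squared coherence estimate and cross-spectrum phase per
    frequency bin, averaged over short frames of signals x and y."""
    bins = L // 2 + 1
    Sxx, Syy = [0.0] * bins, [0.0] * bins
    Sxy = [0j] * bins
    for fx, fy in zip(frames(x, L, hop), frames(y, L, hop)):
        X, Y = dft(fx), dft(fy)
        for k in range(bins):
            Sxx[k] += abs(X[k]) ** 2
            Syy[k] += abs(Y[k]) ** 2
            Sxy[k] += X[k] * Y[k].conjugate()
    coh = [abs(Sxy[k]) ** 2 / (Sxx[k] * Syy[k] + 1e-12) for k in range(bins)]
    phase = [cmath.phase(Sxy[k]) for k in range(bins)]
    return coh, phase
```

Comparing a clean waveform with its perturbed version in this way lets one look for the reported trend: lower coherence and larger cross-spectrum phase changes in the higher bands.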
- Fig. 2 shows a flowchart of a method for training a speech adversarial example detection model provided by an embodiment of the present invention, the method being executed by a computing device.
- the computing device may be a computer device, such as a personal computer, a desktop computer, a tablet computer, etc.; it may also be other artificial intelligence devices or terminals, such as a robot, a mobile phone, etc., which are not specifically limited in the embodiment of the present invention.
- the method includes the following steps:
- Step 110: Obtain speech training samples, the speech training samples including a plurality of normal speech samples and a plurality of adversarial speech samples; the adversarial speech samples are negative samples with tampered semantics.
- An adversarial speech sample, a negative sample with tampered semantics, refers to speech to which an adversarial noise perturbation has been added in order to tamper with its semantics.
- Speech adversarial samples with tampered semantics can be generated by means of a white-box attack or a black-box attack.
- White-box attacks (such as the C&W Attack, an adversarial sample attack algorithm proposed by Carlini and Wagner) assume that model parameters can be accessed, and apply gradient-based data corrections in a targeted manner to modify the original data.
- In a black-box attack (such as the Taori Attack, an adversarial sample attack algorithm proposed by Taori et al. in 2019), the attacker cannot access the internal information of the model and instead uses heuristics to add noise, continuously adjusting the added noise to modify the original data.
- The specific process of generating speech adversarial samples via the C&W Attack may be: obtaining original normal speech samples; and generating adversarial speech samples from the original normal speech samples through an objective function.
- The objective function is: min ‖δ‖₂² + ℓ(x′+δ, t)  s.t.  db(δ) < T
- δ represents the adversarial perturbation
- x′ is the original normal speech sample
- t is the target sentence
- l is the CTC loss
- the distortion is represented in decibels, db(δ)
- the distortion represents the relative loudness of the audio on a logarithmic scale
- ‖·‖₂ represents the two-norm
- s.t. denotes the constraint
- T represents the threshold value of the energy of the adversarial disturbance
- In the embodiment of the present invention, the threshold can be set according to the specific scenario.
- the target sentence refers to a sentence whose semantics has been tampered with corresponding to the speech adversarial sample.
- The adversarial speech samples can be obtained by gradient descent on the objective function set above.
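The generation step above can be sketched as projected gradient descent on min ‖δ‖₂² + ℓ(x′+δ, t) subject to db(δ) < T. A real implementation would backpropagate the CTC loss through the ASR model; here a simple quadratic surrogate stands in for ℓ, and the `db` definition, learning rate, and step count are illustrative assumptions rather than the patent's exact procedure.

```python
import math

def db(delta):
    """Relative loudness of the perturbation in decibels: 20*log10(max |delta_i|)."""
    peak = max(abs(d) for d in delta)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def ctc_loss_surrogate(x_adv, target):
    """Stand-in for the CTC loss l(x'+delta, t): a quadratic pull toward a
    hypothetical waveform that the ASR would transcribe as the target sentence t."""
    return sum((a - b) ** 2 for a, b in zip(x_adv, target))

def ctc_grad_surrogate(x_adv, target):
    """Analytic gradient of the surrogate loss with respect to x_adv."""
    return [2 * (a - b) for a, b in zip(x_adv, target)]

def generate_adversarial(x, target, T=-20.0, lr=0.05, steps=500):
    """Minimize ||delta||_2^2 + l(x'+delta, t) s.t. db(delta) < T
    by projected gradient descent on the perturbation delta."""
    delta = [0.0] * len(x)
    max_amp = 10 ** (T / 20)  # db(delta) < T  <=>  max|delta_i| < 10^(T/20)
    for _ in range(steps):
        x_adv = [a + d for a, d in zip(x, delta)]
        g_loss = ctc_grad_surrogate(x_adv, target)
        # gradient of ||delta||_2^2 is 2*delta
        delta = [d - lr * (2 * d + g) for d, g in zip(delta, g_loss)]
        # projection: clip each sample so the loudness constraint holds
        delta = [max(-max_amp, min(max_amp, d)) for d in delta]
    return delta
```

The projection step enforces the db(δ) < T constraint after every update, which is one common way to handle such a constraint in practice.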
- Step 120: Perform spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms, respectively.
- The embodiment of the present invention divides each speech training sample into a plurality of short time frames, where the length of a short time frame can be a few hundred milliseconds. Specifically, each speech training sample is divided into a plurality of small speech segments; a truncation window function is applied to the plurality of small speech segments to obtain a plurality of windowed small speech segments; and a short-time Fourier transform is performed on each windowed small speech segment to obtain multiple spectrograms corresponding to each speech training sample, thereby converting the speech training samples into the frequency domain.
- The conversion process is the standard short-time Fourier transform power spectrum: P_t(ω) = |Σ_{m=0}^{L−1} x[t+m]·w[m]·e^{−jωm}|²
- P_t(ω) is the spectrogram corresponding to the small speech segment starting at time t
- w[m] is a window sequence of length L
- the truncated window function is a Hanning window function.
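The segmentation, Hanning windowing, and short-time Fourier transform described above can be sketched as follows. The frame length `L=64`, hop size, and one-sided power spectrum are illustrative choices not fixed by the text.

```python
import cmath
import math

def hann(L):
    """Hanning (Hann) truncation window of length L."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * m / (L - 1)) for m in range(L)]

def stft_spectrogram(x, L=64, hop=32):
    """Split the waveform into small segments, apply the Hann window to each,
    and take a short-time Fourier transform of each windowed segment;
    returns one power spectrum (length L//2 + 1) per segment."""
    w = hann(L)
    spectrograms = []
    for start in range(0, len(x) - L + 1, hop):
        seg = [x[start + m] * w[m] for m in range(L)]
        spectrum = []
        for k in range(L // 2 + 1):  # one-sided DFT bins
            X = sum(seg[m] * cmath.exp(-2j * math.pi * k * m / L) for m in range(L))
            spectrum.append(abs(X) ** 2)  # power: |X|^2
        spectrograms.append(spectrum)
    return spectrograms
```

On a pure 1 kHz tone sampled at 8 kHz, for example, the energy of each frame concentrates in DFT bin k = L·f/fs = 64·1000/8000 = 8.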
- Step 130 input the positive sample spectrogram and the negative sample spectrogram respectively into a preset neural network for training to obtain a speech adversarial sample detection model.
- A label is added to each positive sample spectrogram and each negative sample spectrogram, where the label of a positive sample spectrogram can be set to 1 and the label of a negative sample spectrogram to −1.
- the labeled positive sample spectrogram and negative sample spectrogram are respectively input into the preset neural network for iterative training, so as to obtain a speech adversarial sample detection model.
- the preset neural network is a convolutional neural network
- the convolutional neural network includes a convolutional layer, a downsampling layer, a fully connected layer and an output layer.
- Specifically, the network may be: 3 convolutional layers and 2 down-sampling layers combined alternately, followed by 3 fully connected layers; the final output layer is a single node, whose value represents the output value of the model.
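A minimal sketch of tracing feature-map sizes through the stack just described (3 convolutional layers alternating with 2 down-sampling layers, then 3 fully connected layers and one output node). The kernel sizes, strides, padding, input size, and fully connected widths are assumptions for illustration; the text fixes only the layer counts and the single output node.

```python
def conv2d_out(h, w, k=3, stride=1, pad=1):
    """Output size of a convolution layer (square kernel; values illustrative)."""
    return ((h + 2 * pad - k) // stride + 1, (w + 2 * pad - k) // stride + 1)

def pool2d_out(h, w, k=2, stride=2):
    """Output size of a 2x2 down-sampling (pooling) layer."""
    return ((h - k) // stride + 1, (w - k) // stride + 1)

def trace_architecture(h, w):
    """Trace feature-map sizes through conv-pool-conv-pool-conv,
    then list the fully connected layer widths ending in one output node."""
    shapes = [("input", h, w)]
    h, w = conv2d_out(h, w); shapes.append(("conv1", h, w))
    h, w = pool2d_out(h, w); shapes.append(("pool1", h, w))
    h, w = conv2d_out(h, w); shapes.append(("conv2", h, w))
    h, w = pool2d_out(h, w); shapes.append(("pool2", h, w))
    h, w = conv2d_out(h, w); shapes.append(("conv3", h, w))
    # flatten -> fc1 -> fc2 -> fc3 -> single output node F_theta(x)
    fc_sizes = [256, 64, 16, 1]  # illustrative widths; only the final node is fixed
    return shapes, fc_sizes

shapes, fcs = trace_architecture(33, 33)  # e.g. a 33x33 spectrogram patch
```

With 3x3 kernels, stride 1, and padding 1, convolutions preserve spatial size, so only the two pooling layers shrink the map before flattening.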
- the convolutional neural network can be iteratively trained through the energy loss function, specifically:
- the positive sample spectrogram with labels and the negative sample spectrogram with labels are respectively input into the preset neural network for training, and a prediction result is obtained.
- For each speech training sample x input to the model, let its label be Y ∈ {1, −1}. First input it into the model to obtain the model's output value, denoted F_θ(x). When the output F_θ(x) of the speech adversarial sample detection model is greater than 0, the input is judged to be a positive sample spectrogram; when F_θ(x) is less than 0, it is judged to be a negative sample spectrogram.
- An energy loss function E_θ(Y, x) = −Y·μ·F_θ(x) is calculated according to the label of the positive sample spectrogram, the label of the negative sample spectrogram, and the prediction result, where:
- θ is a parameter of the preset neural network
- Y is the label of the speech training sample
- x is the speech training sample
- μ is a positive constant
- The parameters of the neural network are adjusted, and iterative training is performed until the energy loss function converges or reaches a preset threshold, yielding the optimal parameters and thus the final speech adversarial sample detection model.
- When adjusting the parameters of the preset neural network according to the obtained energy loss function, the parameter θ may be adjusted by gradient descent.
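A toy sketch of this training loop, assuming the energy loss takes the form E_θ(Y, x) = −Y·μ·F_θ(x) (one reading of the formula in the text), with a linear score standing in for the CNN output F_θ(x). Note that this raw loss is unbounded below, so a real implementation would rely on regularization or the stopping threshold the text mentions; the model, learning rate, and data here are illustrative assumptions.

```python
def F(theta, x):
    """Toy stand-in for the CNN output F_theta(x): a linear score."""
    return sum(t * xi for t, xi in zip(theta, x))

def energy_loss(theta, x, Y, mu=1.0):
    """E_theta(Y, x) = -Y * mu * F_theta(x); mu is a positive constant."""
    return -Y * mu * F(theta, x)

def train_step(theta, x, Y, mu=1.0, lr=0.1):
    """One gradient-descent step on the energy loss:
    dE/dtheta_i = -Y * mu * x_i."""
    return [t - lr * (-Y * mu * xi) for t, xi in zip(theta, x)]

def predict(theta, x):
    """F_theta(x) > 0 -> positive (normal) sample; otherwise negative (adversarial)."""
    return 1 if F(theta, x) > 0 else -1

# toy data: positive samples cluster at +x, negative samples at -x
data = [([1.0, 0.5], 1), ([0.8, 0.9], 1), ([-1.0, -0.4], -1), ([-0.7, -0.8], -1)]
theta = [0.0, 0.0]
for _ in range(20):  # a few epochs suffice on this toy set
    for x, Y in data:
        theta = train_step(theta, x, Y)
```

After training, correctly scored samples have Y·F_θ(x) > 0 and therefore negative energy, matching the sign-based decision rule above.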
- For each speech training sample, the probability of being a positive sample or a negative sample is calculated to determine whether the speech training sample is normal speech or a speech adversarial sample.
- The probability can be calculated with the softmax function: the speech adversarial sample detection model outputs, for the spectrogram of each small segment, whether it is a positive or a negative sample, and the softmax function is applied to the counts of positive and negative judgments. If more of the N speech segments of a speech training sample are judged to be positive samples, the sample is considered a normal speech sample; otherwise, it is considered a speech adversarial sample.
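The segment-level aggregation just described (per-segment sign decisions combined via softmax and majority vote) might be sketched as follows. Applying softmax to the vote counts is one reading of the text's brief description and is an assumption.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify_utterance(segment_outputs):
    """Aggregate per-segment detector outputs F_theta(x) into one decision:
    each segment votes positive if its score is > 0; softmax over the vote
    counts gives the probability that the utterance is normal speech."""
    pos_votes = sum(1 for s in segment_outputs if s > 0)
    neg_votes = len(segment_outputs) - pos_votes
    p_normal, p_adversarial = softmax([pos_votes, neg_votes])
    label = "normal" if pos_votes > neg_votes else "adversarial"
    return label, p_normal
```

With 4 of 5 segments judged positive, for instance, the majority vote declares the utterance normal with high probability.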
- In the embodiments of the present invention, spectrogram features are extracted from the speech training samples to obtain positive sample spectrograms and negative sample spectrograms, which are respectively input into a preset neural network for training to obtain a speech adversarial sample detection model. This forms an automatic speech adversarial sample detection tool and improves the efficiency of identifying fabricated speech while maintaining high judgment accuracy, providing a front-end safeguard for ASR security and preventing the adverse effects of deliberately manufactured adversarial examples that tamper with semantics.
- Fig. 3 shows a flow chart of a method for detecting speech adversarial samples provided by another embodiment of the present invention, the method being executed by a computing device.
- the computing device may be a computer device, such as a personal computer, a desktop computer, a tablet computer, etc.; it may also be other artificial intelligence devices or terminals, such as a robot, a mobile phone, etc., which are not specifically limited in the embodiment of the present invention.
- the method includes the following steps:
- Step 210 Obtain voice data to be detected.
- Step 220 Convert the speech data to be detected into spectrograms to be detected.
- The speech data to be detected is first divided, according to preset rules, into a plurality of small speech segments to be detected; a truncation window function is applied to the plurality of small speech segments to be detected to obtain a plurality of windowed small speech segments to be detected; and a short-time Fourier transform is performed on each windowed small speech segment to be detected to obtain a plurality of spectrograms to be detected corresponding to the speech data to be detected.
- This process is roughly the same as the process of converting to a spectrogram during the training process of the speech adversarial sample detection model, and will not be repeated here.
- Step 230 Input the spectrogram to be detected into a speech adversarial sample detection model; the speech adversarial sample detection model is trained according to the training method.
- the output results of each spectrogram to be detected are obtained, and the softmax function is used to calculate the detection result of the speech data to be detected.
- Step 240 Output the detection result of the speech data to be detected.
- The detection result indicates whether the speech data to be detected is normal speech or adversarial speech with tampered semantics; the detection result is output to the user so that the user can perform corresponding operations according to it.
- In the embodiments of the present invention, spectrogram features are extracted from the speech training samples to obtain positive sample spectrograms and negative sample spectrograms, which are respectively input into a preset neural network for training to obtain a speech adversarial sample detection model. This forms an automatic speech adversarial sample detection tool and improves the efficiency of identifying fabricated speech while maintaining high judgment accuracy, providing a front-end safeguard for ASR security and preventing the adverse effects of deliberately manufactured adversarial examples that tamper with semantics.
- FIG. 4 shows a schematic structural diagram of a training device for a speech adversarial sample detection model provided by an embodiment of the present invention.
- the device 300 includes: a first acquisition module 310 , an extraction module 320 and a training module 330 .
- The first acquisition module 310 is configured to acquire speech training samples, the speech training samples including normal speech samples and adversarial speech samples; the adversarial speech samples are negative samples with tampered semantics.
- the extraction module 320 is configured to perform spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively.
- the training module 330 is configured to input the positive sample spectrogram and the negative sample spectrogram into the preset neural network for training to obtain the speech adversarial sample detection model.
- the specific working process of the training device for the speech adversarial sample detection model of the embodiment of the present invention is substantially the same as the method steps of the above embodiment of the training method for the speech adversarial sample detection model, and will not be repeated here.
- In the embodiments of the present invention, spectrogram features are extracted from the speech training samples to obtain positive sample spectrograms and negative sample spectrograms, which are respectively input into a preset neural network for training to obtain a speech adversarial sample detection model. This forms an automatic speech adversarial sample detection tool and improves the efficiency of identifying fabricated speech while maintaining high judgment accuracy, providing a front-end safeguard for ASR security and preventing the adverse effects of deliberately manufactured adversarial examples that tamper with semantics.
- FIG. 5 shows a schematic structural diagram of an apparatus for detecting speech adversarial samples provided by an embodiment of the present invention.
- the device 400 includes:
- the second acquiring module 410 is configured to acquire voice data to be detected.
- a conversion module 420 configured to convert the speech data to be detected into spectrograms to be detected.
- the detection module 430 is configured to input the spectrogram to be detected into the speech adversarial sample detection model.
- the speech adversarial sample detection model is trained according to the above-mentioned speech adversarial sample detection model training method or the above-mentioned speech adversarial sample detection model training device.
- the output module 440 is configured to output the detection result of the speech data to be detected.
- the specific working process of the device for detecting speech adversarial samples in the embodiment of the present invention is generally the same as the steps of the method for detecting speech adversarial samples in the above method embodiment, and will not be repeated here.
- In the embodiments of the present invention, spectrogram features are extracted from the speech training samples to obtain positive sample spectrograms and negative sample spectrograms, which are respectively input into a preset neural network for training to obtain a speech adversarial sample detection model. This forms an automatic speech adversarial sample detection tool and improves the efficiency of identifying fabricated speech while maintaining high judgment accuracy, providing a front-end safeguard for ASR security and preventing the adverse effects of deliberately manufactured adversarial examples that tamper with semantics.
- FIG. 6 shows a schematic structural diagram of a computing device provided by an embodiment of the present invention, and the specific embodiment of the present invention does not limit the specific implementation of the computing device.
- the computing device may include: a processor (processor) 502, a communication interface (Communications Interface) 504, a memory (memory) 506, and a communication bus 508.
- the processor 502 , the communication interface 504 , and the memory 506 communicate with each other through the communication bus 508 .
- the communication interface 504 is configured to communicate with network elements of other devices such as clients or other servers.
- the processor 502 is configured to execute the program 510, and specifically, may execute the relevant steps in the above embodiment of the training method for the speech adversarial sample detection model or the speech adversarial sample detection method.
- the program 510 may include program codes including computer-executable instructions.
- the processor 502 may be a central processing unit CPU, or an ASIC (Application Specific Integrated Circuit), or one or more integrated circuits configured to implement the embodiments of the present invention.
- the one or more processors included in the computing device may be of the same type, such as one or more CPUs, or may be different types of processors, such as one or more CPUs and one or more ASICs.
- the memory 506 is used for storing the program 510 .
- the memory 506 may include a high-speed RAM memory, and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory.
- the program 510 may be invoked by the processor 502 to cause the computing device to perform the following operations:
- acquiring speech training samples, the speech training samples including a plurality of normal speech samples and a plurality of adversarial speech samples; the adversarial speech samples are negative samples with tampered semantics;
- the speech training samples are subjected to spectrogram feature extraction to obtain positive sample spectrograms and negative sample spectrograms respectively;
- the program 510 may be invoked by the processor 502 to cause the computing device to perform the following operations:
- the spectrogram to be detected is input into the speech adversarial sample detection model; the speech adversarial sample detection model is trained according to the above-mentioned training method;
- The acquiring of speech training samples including normal speech samples and adversarial speech samples comprises: obtaining original normal speech samples; and generating adversarial speech samples from the original normal speech samples through an objective function.
- the objective function is: min‖δ‖₂ + l(x′+δ, t)  s.t.  db(δ) ≤ T
- where δ denotes the adversarial perturbation;
- x′ is the original normal speech sample;
- t is the target sentence;
- l is the CTC loss;
- the degree of distortion is expressed in decibels, db(·);
- the degree of distortion represents the relative loudness of the audio on a logarithmic scale;
- T is a threshold on the energy of the adversarial perturbation.
- performing spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively includes: dividing each speech training sample into a plurality of small speech pieces; applying a truncating window function to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces; and performing a short-time Fourier transform on each of the windowed small speech pieces to obtain a plurality of spectrograms corresponding to each speech training sample.
- the truncating window function is a Hanning window function; applying the truncating window function to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces includes: applying the Hanning window function to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces.
- inputting the positive sample spectrograms and the negative sample spectrograms respectively into the preset neural network for training to obtain the speech adversarial sample detection model includes: inputting the positive sample spectrograms and the negative sample spectrograms respectively into the preset neural network for training, and outputting a prediction result; calculating an energy loss function according to the labels of the positive sample spectrograms, the labels of the negative sample spectrograms, and the prediction result; and adjusting the parameters of the preset neural network according to the energy loss function, re-inputting the positive sample spectrograms and the negative sample spectrograms into the preset neural network, recalculating the energy loss function, and readjusting the parameters until the energy loss function converges or reaches a preset threshold, thereby obtaining the speech adversarial sample detection model.
- the energy loss function is defined via the energy value:
- E_θ(Y, x) = -Y·F_θ(x);
- where θ is the parameter of the preset neural network, Y is the label of the speech training sample, x is the speech training sample, and β is a positive constant.
- by performing spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively, and inputting them respectively into the preset neural network for training to obtain the speech adversarial sample detection model, an automated speech adversarial sample detection tool is formed, which improves the efficiency of identifying fabricated speech while maintaining high judgment accuracy, provides a front-end safeguard for ASR security, and prevents the adverse effects of deliberately crafted, semantics-tampering adversarial examples.
- An embodiment of the present invention provides a computer-readable storage medium, the storage medium storing at least one executable instruction which, when run on a computing device, causes the computing device to perform the training method for a speech adversarial sample detection model or the speech adversarial sample detection method described in any of the above method embodiments.
- the executable instructions can specifically be used to cause the computing device to perform the following operations:
- acquiring speech training samples, the speech training samples including a plurality of normal speech samples and a plurality of adversarial speech samples, the adversarial speech samples being negative samples with tampered semantics;
- performing spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively;
- inputting the positive sample spectrograms and the negative sample spectrograms respectively into a preset neural network for training to obtain a speech adversarial sample detection model; or
- the executable instructions can also be used to cause the computing device to perform the following operations:
- acquiring speech data to be detected;
- converting the speech data to be detected into a spectrogram to be detected;
- inputting the spectrogram to be detected into the speech adversarial sample detection model, the speech adversarial sample detection model being trained according to the above training method;
- outputting a detection result of the speech data to be detected.
- in an optional manner, acquiring the speech training samples, the speech training samples including normal speech samples and adversarial speech samples, includes: obtaining original normal speech samples; and generating adversarial speech samples from the original normal speech samples through an objective function;
- the objective function is: min‖δ‖₂ + l(x′+δ, t)  s.t.  db(δ) ≤ T
- where δ denotes the adversarial perturbation;
- x′ is the original normal speech sample;
- t is the target sentence;
- l is the CTC loss;
- the degree of distortion is expressed in decibels, db(·);
- the degree of distortion represents the relative loudness of the audio on a logarithmic scale;
- T is a threshold on the energy of the adversarial perturbation.
- performing spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively includes: dividing each speech training sample into a plurality of small speech pieces; applying a truncating window function to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces; and performing a short-time Fourier transform on each of the windowed small speech pieces to obtain a plurality of spectrograms corresponding to each speech training sample.
- the truncating window function is a Hanning window function; applying the truncating window function to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces includes: applying the Hanning window function to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces.
- inputting the positive sample spectrograms and the negative sample spectrograms respectively into the preset neural network for training to obtain the speech adversarial sample detection model includes: inputting the positive sample spectrograms and the negative sample spectrograms respectively into the preset neural network for training, and outputting a prediction result; calculating an energy loss function according to the labels of the positive sample spectrograms, the labels of the negative sample spectrograms, and the prediction result; and adjusting the parameters of the preset neural network according to the energy loss function, re-inputting the positive sample spectrograms and the negative sample spectrograms into the preset neural network, recalculating the energy loss function, and readjusting the parameters until the energy loss function converges or reaches a preset threshold, thereby obtaining the speech adversarial sample detection model.
- the energy loss function is defined via the energy value:
- E_θ(Y, x) = -Y·F_θ(x);
- where θ is the parameter of the preset neural network, Y is the label of the speech training sample, x is the speech training sample, and β is a positive constant.
- by performing spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively, and inputting them respectively into the preset neural network for training to obtain the speech adversarial sample detection model, an automated speech adversarial sample detection tool is formed, which improves the efficiency of identifying fabricated speech while maintaining high judgment accuracy, provides a front-end safeguard for ASR security, and prevents the adverse effects of deliberately crafted, semantics-tampering adversarial examples.
- An embodiment of the present invention provides a training apparatus for a speech adversarial sample detection model, configured to perform the above training method for a speech adversarial sample detection model.
- An embodiment of the present invention provides a speech adversarial sample detection apparatus, configured to perform the above speech adversarial sample detection method.
- An embodiment of the present invention provides a computer program that can be called by a processor to enable a computing device to execute the method for training a speech adversarial sample detection model in any of the above method embodiments.
- An embodiment of the present invention provides a computer program, and the computer program can be invoked by a processor to enable a computing device to execute the method for detecting speech adversarial samples in any of the above method embodiments.
- An embodiment of the present invention provides a computer program product. The computer program product includes a computer program stored on a computer-readable storage medium, and the computer program includes program instructions which, when run on a computer, cause the computer to perform the training method for a speech adversarial sample detection model in any of the above method embodiments.
- An embodiment of the present invention provides a computer program product. The computer program product includes a computer program stored on a computer-readable storage medium, and the computer program includes program instructions which, when run on a computer, cause the computer to perform the speech adversarial sample detection method in any of the above method embodiments.
- the modules in the devices in an embodiment can be adaptively changed and arranged in one or more devices different from that embodiment.
- the modules, units, or components in the embodiments can be combined into one module, unit, or component, and can likewise be divided into a plurality of sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination.
- Each feature disclosed in this specification may be replaced by alternative features serving the same, equivalent or similar purpose, unless expressly stated otherwise.
Landscapes
- Engineering & Computer Science (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Multimedia (AREA)
- Artificial Intelligence (AREA)
- Measurement Of Mechanical Vibrations Or Ultrasonic Waves (AREA)
- Machine Translation (AREA)
Abstract
A training method for a speech adversarial sample detection model, an apparatus, a device, and a computer-readable storage medium. The method includes: acquiring speech training samples, the speech training samples including a plurality of normal speech samples and a plurality of adversarial speech samples (110); performing spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively (120); and inputting the positive sample spectrograms and the negative sample spectrograms respectively into a preset neural network for training to obtain a speech adversarial sample detection model (130). The method achieves accurate detection of speech adversarial samples.
Description
Embodiments of the present invention relate to the technical field of artificial intelligence, and in particular to a speech adversarial sample detection method, a training method for a speech adversarial sample detection model, an apparatus, a device, and a computer-readable storage medium.
The goal of a speech recognition system is to translate a segment of speech into text, i.e., a speech-to-text transcription process. In recent years, end-to-end speech recognition systems based on deep learning have gradually become widespread in the market. However, the emergence of adversarial example techniques has raised security issues for end-to-end ASR. Current adversarial machine learning techniques can generate speech adversarial examples: by adding carefully crafted perturbations to the audio, they deliberately cause the ASR to make "transcription errors" that the human ear cannot perceive. The inventors of the present application found, in the course of implementing the present invention, that existing adversarial example techniques can alter the recognition results of a recognition system almost without restriction, following the tamperer's intent. After a subtle "adversarial perturbation noise" is added to a segment of normal speech, the original transcription changes from "I am going out to play today" to "I am staying at home today". Such transcription errors can turn certain keywords into "specific text" during transcription; people with ulterior motives can make the transcription result become whatever they want. Besides causing keyword errors, the meaning of an entire sentence can be replaced with a completely different one during transcription. More importantly, such tampered speech samples are difficult to perceive by listening. If used by a purposeful tamperer, they may cause various serious consequences; for example, adversarial examples can be used to unlock WeChat's "voice lock" and thereby gain the right to use another person's WeChat account. Therefore, detecting speech adversarial samples has become an important problem to be solved urgently.
Summary of the Invention
In view of the above problems, embodiments of the present invention provide a speech adversarial sample detection method, a training method for a speech adversarial sample detection model, an apparatus, a device, and a computer-readable storage medium, to solve the technical problem in the prior art that adversarial speech data is difficult to identify.
According to one aspect of the embodiments of the present invention, a training method for a speech adversarial sample detection model is provided, the method including:
acquiring speech training samples, the speech training samples including a plurality of normal speech samples and a plurality of adversarial speech samples, the adversarial speech samples being negative samples with tampered semantics;
performing spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively; and
inputting the positive sample spectrograms and the negative sample spectrograms respectively into a preset neural network for training to obtain a speech adversarial sample detection model.
In an optional manner, acquiring the speech training samples, the speech training samples including normal speech samples and adversarial speech samples, includes: obtaining original normal speech samples; and generating adversarial speech samples from the original normal speech samples through an objective function.
The objective function is: min‖δ‖₂ + l(x′+δ, t)  s.t.  db(δ) ≤ T
where δ denotes the adversarial perturbation, x′ is the original normal speech sample, t is the target sentence, l is the CTC loss, the degree of distortion is expressed in decibels db(·) and represents the relative loudness of the audio on a logarithmic scale, and T is a threshold on the energy of the adversarial perturbation.
In an optional manner, performing spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively includes: dividing each speech training sample into a plurality of small speech pieces; applying a truncating window function to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces; and performing a short-time Fourier transform on each of the windowed small speech pieces to obtain a plurality of spectrograms corresponding to each speech training sample.
In an optional manner, the truncating window function is a Hanning window function; applying the truncating window function to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces includes: applying the Hanning window function to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces.
In an optional manner, inputting the positive sample spectrograms and the negative sample spectrograms respectively into the preset neural network for training to obtain the speech adversarial sample detection model includes: inputting the positive sample spectrograms and the negative sample spectrograms respectively into the preset neural network for training, and outputting a prediction result; calculating an energy loss function according to the labels of the positive sample spectrograms, the labels of the negative sample spectrograms, and the prediction result; and adjusting the parameters of the preset neural network according to the energy loss function, re-inputting the positive sample spectrograms and the negative sample spectrograms into the preset neural network, recalculating the energy loss function, and readjusting the parameters until the energy loss function converges or reaches a preset threshold, thereby obtaining the speech adversarial sample detection model.
In an optional manner, the energy loss function is defined via the energy value:
E_θ(Y, x) = -Y·F_θ(x);
where θ is the parameter of the preset neural network, Y is the label of the speech training sample, x is the speech training sample, and β is a positive constant.
According to another aspect of the embodiments of the present invention, a speech adversarial sample detection method is provided, including:
acquiring speech data to be detected;
converting the speech data to be detected into a spectrogram to be detected;
inputting the spectrogram to be detected into a speech adversarial sample detection model, the speech adversarial sample detection model being trained according to the above training method; and
outputting a detection result of the speech data to be detected.
According to another aspect of the embodiments of the present invention, a training apparatus for a speech adversarial sample detection model is provided, including:
a first acquisition module, configured to acquire speech training samples, the speech training samples including normal speech samples and adversarial speech samples, the adversarial speech samples being negative samples with tampered semantics;
an extraction module, configured to perform spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively; and
a training module, configured to input the positive sample spectrograms and the negative sample spectrograms respectively into a preset neural network for training to obtain a speech adversarial sample detection model.
According to another aspect of the embodiments of the present invention, a speech adversarial sample detection apparatus is provided, including:
a second acquisition module, configured to acquire speech data to be detected;
a conversion module, configured to convert the speech data to be detected into a spectrogram to be detected;
a detection module, configured to input the spectrogram to be detected into a speech adversarial sample detection model, the speech adversarial sample detection model being trained according to the training method for a speech adversarial sample detection model or by the training apparatus for a speech adversarial sample detection model; and
an output module, configured to output a detection result of the speech data to be detected.
According to another aspect of the embodiments of the present invention, a computing device is provided, including: a processor, a memory, a communication interface, and a communication bus, where the processor, the memory, and the communication interface communicate with one another via the communication bus; the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the operations of the training method for a speech adversarial sample detection model or of the speech adversarial sample detection method.
According to yet another aspect of the embodiments of the present invention, a computer-readable storage medium is provided, the storage medium storing at least one executable instruction which, when run on a computing device, causes the computing device to perform the operations of the training method for a speech adversarial sample detection model or of the speech adversarial sample detection method.
In the embodiments of the present invention, speech training samples are acquired; spectrogram feature extraction is performed on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively; and the positive sample spectrograms and the negative sample spectrograms are respectively input into the preset neural network for training to obtain the speech adversarial sample detection model. This forms an automated speech adversarial sample detection tool that improves the efficiency of identifying fabricated speech while maintaining high judgment accuracy, provides a front-end safeguard for ASR security, and prevents the adverse effects of deliberately crafted, semantics-tampering adversarial examples.
The above description is only an overview of the technical solutions of the embodiments of the present invention. In order that the technical means of the embodiments may be understood more clearly and implemented according to the contents of the specification, and that the above and other objects, features, and advantages of the embodiments may become more apparent and understandable, specific implementations of the present invention are set forth below.
The drawings are only for the purpose of illustrating the implementations and are not to be construed as limiting the present invention. Throughout the drawings, the same reference symbols denote the same components. In the drawings:
FIG. 1 is a schematic diagram, provided by an embodiment of the present invention, of a speech recognition error caused by adding an "adversarial noise perturbation";
FIG. 2 is a schematic flowchart of a training method for a speech adversarial sample detection model provided by an embodiment of the present invention;
FIG. 3 is a schematic flowchart of a speech adversarial sample detection method provided by an embodiment of the present invention;
FIG. 4 is a schematic structural diagram of a training apparatus for a speech adversarial sample detection model provided by an embodiment of the present invention;
FIG. 5 is a schematic structural diagram of a speech adversarial sample detection apparatus provided by an embodiment of the present invention;
FIG. 6 is a schematic structural diagram of a computing device provided by an embodiment of the present invention.
Exemplary embodiments of the present invention will be described in more detail below with reference to the drawings. Although exemplary embodiments of the present invention are shown in the drawings, it should be understood that the present invention can be implemented in various forms and should not be limited by the embodiments set forth herein.
Technical terms appearing in the embodiments of the present invention are explained below:
ASR: Automatic Speech Recognition system.
Waveform signal: the time-domain waveform signal of speech.
Adversarial perturbation: noise added to clean speech that turns it into a speech adversarial sample.
Adversarial sample: a sample that deceives a neural network through the addition of perturbations imperceptible to humans.
Short-time Fourier transform (STFT): a time-frequency localized window function is selected, and by moving the window function along the time axis, the signal is analyzed segment by segment to obtain a set of local "spectra" of the signal.
Spectrogram: a speech spectrum analysis view.
Multi-spectrogram: multiple STFT spectrograms of a speech signal.
CNN: convolutional neural network.
EBM: energy-based model, a model based on an energy function.
First, the main idea of the embodiments of the present invention is set forth. As shown in FIG. 1, the spectrum on the left is normal speech, whose recognition result is "I am going out to play today"; the spectrum in the middle is the adversarial noise perturbation; and the spectrogram on the right is the superposition of the normal speech spectrum on the left and the adversarial noise perturbation in the middle, i.e., a semantics-tampering speech adversarial sample, whose recognition result is "I am staying at home today". As can be seen from the spectrograms, the left and right spectra are very similar, while the semantics are completely different. Methods for generating semantics-tampering speech adversarial samples fall roughly into white-box attacks and black-box attacks. A white-box attack assumes access to the model parameters and applies targeted gradient-related data corrections to modify the original data; in a black-box attack, the attacker cannot access the model's internal information and usually adds noise in a trial-and-error manner, continuously adjusting the added noise to modify the original data. Through analysis, the inventors of the present application found that, compared with normal speech, the STFT spectrograms of semantics-tampered speech exhibit the following pattern in the coherence estimate and cross-spectrum phase across frequency bands: the higher the frequency band, the lower the coherence, and the larger the variation of its cross-spectrum phase. Therefore, based on the difference in STFT spectrogram characteristics between normal speech samples and speech adversarial samples, the embodiments of the present invention use STFT spectrograms as features input into a convolutional neural network and use an energy-based model for classification, so that adversarial samples can be identified accurately.
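The frequency-band analysis above (lower coherence and larger cross-spectrum phase variation in higher bands for tampered speech) can be reproduced with standard spectral estimators. A minimal sketch using `scipy.signal`; the synthetic signals, sample rate, and window length below are illustrative assumptions, not values from the embodiments:

```python
import numpy as np
from scipy.signal import coherence, csd

fs = 16000                                         # assumed sample rate
t = np.arange(fs) / fs
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 * t)                # stand-in "normal speech"
tampered = clean + 0.01 * rng.standard_normal(fs)  # stand-in perturbed speech

# Coherence estimate between the two signals, one value per frequency bin
f, Cxy = coherence(clean, tampered, fs=fs, nperseg=512)

# Cross-spectrum phase, one value per frequency bin
_, Pxy = csd(clean, tampered, fs=fs, nperseg=512)
phase = np.angle(Pxy)

# A detector following the inventors' observation would compare these
# quantities between low and high frequency bands.
print(Cxy.shape, phase.shape)   # (257,) (257,)
```

This only illustrates the estimators; the detection method itself feeds STFT spectrograms into a CNN rather than thresholding coherence directly.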
FIG. 2 shows a flowchart of a training method for a speech adversarial sample detection model provided by an embodiment of the present invention; the method is performed by a computing device. The computing device may be a computer device such as a personal computer, a desktop computer, or a tablet, or another artificial intelligence device or terminal such as a robot or a mobile phone; the embodiments of the present invention impose no specific limitation. As shown in FIG. 2, the method includes the following steps:
Step 110: acquire speech training samples, the speech training samples including a plurality of normal speech samples and a plurality of adversarial speech samples; the adversarial speech samples are negative samples with tampered semantics.
An adversarial speech sample being a negative sample with tampered semantics refers to speech whose semantics has been tampered with by adding an adversarial noise perturbation. In the embodiments of the present invention, semantics-tampering speech adversarial samples may be generated by white-box or black-box attacks. A white-box attack (e.g., the Taori Attack, an adversarial example attack algorithm proposed by Taori et al. in 2019) assumes access to the model parameters and applies targeted gradient-related data corrections to modify the original data. In a black-box attack (e.g., the C&W Attack, an adversarial example attack algorithm proposed by Carlini and Wagner), the attacker cannot access the model's internal information and adds noise in a trial-and-error manner, continuously adjusting the added noise to modify the original data.
In the embodiments of the present invention, the specific process of generating speech adversarial samples by the C&W Attack method may be: obtaining original normal speech samples; and generating adversarial speech samples from the original normal speech samples through an objective function.
The objective function is:
min‖δ‖₂ + l(x′+δ, t)
s.t. db(δ) ≤ T
where δ denotes the adversarial perturbation, x′ is the original normal speech sample, x′+δ denotes the speech adversarial sample, t is the target sentence, l is the CTC loss, the degree of distortion is expressed in decibels db(·) and represents the relative loudness of the audio on a logarithmic scale, ‖·‖₂ denotes the 2-norm, s.t. denotes the constraint, and T is a threshold on the energy of the adversarial perturbation; in the embodiments of the present invention, this threshold can be set according to the specific scenario. In the embodiments of the present invention, the target sentence refers to the semantics-tampered sentence corresponding to the speech adversarial sample.
A speech adversarial sample can be obtained by gradient descent according to the objective function set above.
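The constrained objective and the db(·) distortion measure can be sketched as follows. The max-amplitude decibel convention and the stand-in loss `fake_ctc` are assumptions for illustration only; a real attack would substitute the CTC loss of the target ASR and minimize by gradient descent:

```python
import numpy as np

def db(delta, eps=1e-12):
    """Distortion in decibels: relative loudness of the perturbation on a
    logarithmic scale (max-amplitude convention, assumed)."""
    return 20.0 * np.log10(np.max(np.abs(delta)) + eps)

def objective(delta, x_orig, target, loss):
    """min ||delta||_2 + l(x' + delta, t), to be minimized subject to
    db(delta) <= T; `loss` stands in for the CTC loss l."""
    return np.linalg.norm(delta, 2) + loss(x_orig + delta, target)

def feasible(delta, T=-20.0):
    """Constraint s.t. db(delta) <= T; the threshold T is scenario-dependent."""
    return db(delta) <= T

x_prime = np.zeros(100)                                  # toy original sample
delta = 0.01 * np.ones(100)                              # toy perturbation
fake_ctc = lambda audio, t: float(np.mean(audio ** 2))   # NOT a real CTC loss
print(round(db(delta), 1))                               # -40.0
print(round(objective(delta, x_prime, "target sentence", fake_ctc), 4))  # 0.1001
```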
Step 120: perform spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively.
Since adversarial perturbations generated based on the gradients of a B-RNN network are distributed along the time series of the speech data, the embodiments of the present invention divide each speech training sample into multiple short-time frames, each of which may be a few hundred milliseconds long. Specifically, each speech training sample is divided into a plurality of small speech pieces; a truncating window function is applied to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces; and a short-time Fourier transform is performed on each of the windowed small speech pieces to obtain a plurality of spectrograms corresponding to each speech training sample, thereby converting the speech training samples into the frequency domain.
In this conversion process, P_t(w, m) is the spectrogram corresponding to each small speech piece, w[m] is a window sequence of length L with m = 0, 1, ..., L, and N is the number of DFT points used in the transform. In the embodiments of the present invention, the truncating window function is a Hanning window function, whose window may be set to L = 512. The speech sample signal x_t slides across the window, so that the STFT maps x_t, a function of one variable, into a function of two variables, i.e., w in the time domain and m in the frequency domain.
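The piece-splitting, Hanning windowing, and short-time Fourier transform described above can be sketched in plain NumPy. The window length L = 512 follows the embodiment; the hop size, FFT length, and sample rate are assumptions:

```python
import numpy as np

def hanning(L):
    """Hanning window w[m] = 0.5 - 0.5*cos(2*pi*m/(L-1)), m = 0..L-1."""
    m = np.arange(L)
    return 0.5 - 0.5 * np.cos(2.0 * np.pi * m / (L - 1))

def spectrogram(x, L=512, hop=256, n_fft=512):
    """Split the signal into small pieces, apply the truncating (Hanning)
    window to each piece, and take the DFT magnitude of each windowed
    piece -- a plain STFT spectrogram."""
    w = hanning(L)
    frames = []
    for start in range(0, len(x) - L + 1, hop):
        piece = x[start:start + L] * w          # windowed small speech piece
        frames.append(np.abs(np.fft.rfft(piece, n_fft)))
    return np.array(frames)                     # shape (num_frames, n_fft//2 + 1)

# 1 kHz tone, 1 second at an assumed 16 kHz sample rate
x = np.sin(2 * np.pi * 1000 * np.arange(16000) / 16000.0)
S = spectrogram(x)
print(S.shape)                                  # (61, 257)
print(int(np.argmax(S[0])))                     # 32: bin of 1000 Hz = 1000*512/16000
```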
Step 130: input the positive sample spectrograms and the negative sample spectrograms respectively into a preset neural network for training to obtain a speech adversarial sample detection model.
After the positive sample spectrograms and the negative sample spectrograms are obtained, a label is added to each of them; the label of a positive sample spectrogram may be set to 1 and the label of a negative sample spectrogram to -1. The labelled positive and negative sample spectrograms are respectively input into the preset neural network for iterative training, thereby obtaining the speech adversarial sample detection model.
In the embodiments of the present invention, the preset neural network is a convolutional neural network including convolutional layers, downsampling layers, fully connected layers, and an output layer. In one embodiment of the present invention, it may specifically be: 3 convolutional layers alternately combined with 2 downsampling layers, followed by 3 fully connected layers, with a final output layer consisting of a single node whose value represents the model's output. After this convolutional neural network is set up, the labelled positive and negative sample spectrograms are respectively input into it for iterative training, thereby obtaining the speech adversarial sample detection model.
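The layer arrangement in this embodiment (3 convolutional layers alternating with 2 downsampling layers, then 3 fully connected layers ending in a single output node) can be sketched in PyTorch. Channel widths, kernel sizes, and the adaptive pooling used to fix the fully connected input size are assumptions not given in the text:

```python
import torch
import torch.nn as nn

class Detector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),   # conv 1
            nn.MaxPool2d(2),                             # downsampling 1
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),  # conv 2
            nn.MaxPool2d(2),                             # downsampling 2
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),  # conv 3
            nn.AdaptiveAvgPool2d((4, 4)),                # assumption, not in the text
        )
        self.classifier = nn.Sequential(
            nn.Linear(64 * 4 * 4, 128), nn.ReLU(),       # fully connected 1
            nn.Linear(128, 32), nn.ReLU(),               # fully connected 2
            nn.Linear(32, 1),                            # output layer: one node, F_theta(x)
        )

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1)).squeeze(-1)

model = Detector()
spec = torch.randn(2, 1, 61, 257)   # a batch of 2 single-channel spectrograms
out = model(spec)
print(tuple(out.shape))             # (2,)
```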
In the embodiments of the present invention, the convolutional neural network may be iteratively trained with an energy loss function, specifically as follows:
The labelled positive sample spectrograms and the labelled negative sample spectrograms are respectively input into the preset neural network for training, and a prediction result is output. For each speech training sample x of the model, let its label be Y ∈ (1, -1). The sample is first input into the model to obtain the model output, denoted F_θ(x). When the output F_θ(x) of the speech adversarial sample detection model is greater than 0, the spectrogram is judged to be a positive sample; when F_θ(x) is less than 0, it is judged to be a negative sample.
The energy loss function is calculated according to the labels of the positive sample spectrograms, the labels of the negative sample spectrograms, and the prediction result.
The process of calculating the energy loss function is as follows:
First, the energy value is calculated according to the labels of the positive sample spectrograms, the labels of the negative sample spectrograms, and the prediction result:
E_θ(Y, x) = -Y·F_θ(x);
The energy loss function is then calculated, where θ is the parameter of the preset neural network, Y is the label of the speech training sample, x is the speech training sample, and β is a positive constant; in the embodiments of the present invention, β = 0.5.
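A minimal sketch of the energy value E_θ(Y, x) = -Y·F_θ(x) and of the sign-based decision rule stated above; the β-weighted loss built on this energy is not reproduced here, only the pieces the text states:

```python
def energy(y, f_x):
    """Energy value E_theta(Y, x) = -Y * F_theta(x): when the sign of the
    model output F matches the label Y, the energy is negative (low)."""
    return -y * f_x

def predict(f_x):
    """Decision rule from the embodiment: F_theta(x) > 0 -> positive
    (normal) spectrogram; F_theta(x) < 0 -> negative (adversarial)."""
    return 1 if f_x > 0 else -1

print(energy(1, 2.3))    # -2.3: normal sample scored positive -> low energy
print(energy(-1, 2.3))   # 2.3: adversarial sample scored positive -> high energy
print(predict(-0.7))     # -1: judged a negative (adversarial) spectrogram
```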
The parameters of the preset neural network are adjusted according to the obtained energy loss function; the positive and negative sample spectrograms are re-input into the preset neural network, the energy loss function is recalculated, and the parameters are readjusted, iterating the training until the energy loss function converges or reaches a preset threshold, at which point the optimal parameters are obtained and the final speech adversarial sample detection model is produced. In the embodiments of the present invention, when adjusting the parameters of the preset neural network according to the obtained energy loss function, the parameter θ may be adjusted by gradient descent.
In the embodiments of the present invention, since one speech training sample is divided into multiple small speech pieces, each speech training sample corresponds to multiple spectrograms. After the final speech adversarial sample detection model is obtained, the probability that each speech training sample is a positive or a negative sample is calculated to determine whether the speech training sample is normal speech or a speech adversarial sample. Specifically, the probability may be calculated with the softmax function: the speech adversarial sample detection model outputs, for the spectrogram of each small piece, whether it is a positive or a negative sample, and the softmax function tallies the total numbers of spectrograms judged positive and negative. If, among the N small speech pieces of a speech training sample, more pieces are judged positive, the speech sample is considered a normal speech sample; otherwise, it is considered a speech adversarial sample.
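The per-piece vote above can be sketched as a majority count; reading the softmax tally as a simple count of positive versus negative piece scores is an interpretation of the text, not a formula it states:

```python
import numpy as np

def classify_utterance(piece_scores):
    """Each of the N small pieces of one utterance gets a model score
    F_theta(x) for its spectrogram; pieces with positive scores vote
    'normal', the rest vote 'adversarial', and the majority wins."""
    n_pos = int(np.sum(piece_scores > 0))
    n_neg = int(np.sum(piece_scores < 0))
    return "normal" if n_pos > n_neg else "adversarial"

print(classify_utterance(np.array([0.8, 1.2, -0.3, 0.5])))  # normal
print(classify_utterance(np.array([-0.9, -0.2, 0.1])))      # adversarial
```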
In the embodiments of the present invention, speech training samples are acquired; spectrogram feature extraction is performed on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively; and the positive sample spectrograms and the negative sample spectrograms are respectively input into the preset neural network for training to obtain the speech adversarial sample detection model. This forms an automated speech adversarial sample detection tool that improves the efficiency of identifying fabricated speech while maintaining high judgment accuracy, provides a front-end safeguard for ASR security, and prevents the adverse effects of deliberately crafted, semantics-tampering adversarial examples.
FIG. 3 shows a flowchart of a speech adversarial sample detection method provided by another embodiment of the present invention; the method is performed by a computing device. The computing device may be a computer device such as a personal computer, a desktop computer, or a tablet, or another artificial intelligence device or terminal such as a robot or a mobile phone; the embodiments of the present invention impose no specific limitation. As shown in FIG. 3, the method includes the following steps:
Step 210: acquire speech data to be detected.
Step 220: convert the speech data to be detected into a spectrogram to be detected.
In the embodiments of the present invention, after the speech to be detected is obtained, the speech data to be detected needs to be divided into a plurality of small speech pieces to be detected according to preset rules; a truncating window function is applied to the plurality of small speech pieces to be detected to obtain a plurality of windowed small speech pieces to be detected; and a short-time Fourier transform is performed on each of them to obtain a plurality of spectrograms to be detected corresponding to the speech data to be detected. This process is substantially the same as the spectrogram conversion in the training process of the speech adversarial sample detection model and is not repeated here.
Step 230: input the spectrogram to be detected into the speech adversarial sample detection model; the speech adversarial sample detection model is trained according to the above training method.
In the embodiments of the present invention, after the plurality of spectrograms to be detected are input into the adversarial detection model, the output result for each spectrogram to be detected is obtained, and the detection result for the speech data to be detected is computed through the softmax function.
Step 240: output the detection result of the speech data to be detected.
The detection result indicates whether the speech data to be detected is normal speech or semantics-tampered adversarial speech; it is output to the user so that the user can take corresponding action according to the detection result.
In the embodiments of the present invention, speech training samples are acquired; spectrogram feature extraction is performed on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively; and the positive sample spectrograms and the negative sample spectrograms are respectively input into the preset neural network for training to obtain the speech adversarial sample detection model. This forms an automated speech adversarial sample detection tool that improves the efficiency of identifying fabricated speech while maintaining high judgment accuracy, provides a front-end safeguard for ASR security, and prevents the adverse effects of deliberately crafted, semantics-tampering adversarial examples.
FIG. 4 shows a schematic structural diagram of a training apparatus for a speech adversarial sample detection model provided by an embodiment of the present invention. As shown in FIG. 4, the apparatus 300 includes: a first acquisition module 310, an extraction module 320, and a training module 330.
The first acquisition module 310 is configured to acquire speech training samples, the speech training samples including normal speech samples and adversarial speech samples; the adversarial speech samples are negative samples with tampered semantics.
The extraction module 320 is configured to perform spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively.
The training module 330 is configured to input the positive sample spectrograms and the negative sample spectrograms respectively into the preset neural network for training to obtain the speech adversarial sample detection model.
The specific working process of the training apparatus for a speech adversarial sample detection model in the embodiments of the present invention is substantially the same as the method steps of the above embodiment of the training method and is not repeated here.
In the embodiments of the present invention, speech training samples are acquired; spectrogram feature extraction is performed on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively; and the positive sample spectrograms and the negative sample spectrograms are respectively input into the preset neural network for training to obtain the speech adversarial sample detection model. This forms an automated speech adversarial sample detection tool that improves the efficiency of identifying fabricated speech while maintaining high judgment accuracy, provides a front-end safeguard for ASR security, and prevents the adverse effects of deliberately crafted, semantics-tampering adversarial examples.
FIG. 5 shows a schematic structural diagram of a speech adversarial sample detection apparatus provided by an embodiment of the present invention. As shown in FIG. 5, the apparatus 400 includes:
a second acquisition module 410, configured to acquire speech data to be detected;
a conversion module 420, configured to convert the speech data to be detected into a spectrogram to be detected;
a detection module 430, configured to input the spectrogram to be detected into a speech adversarial sample detection model, where the speech adversarial sample detection model is trained according to the above training method for a speech adversarial sample detection model or by the above training apparatus for a speech adversarial sample detection model; and
an output module 440, configured to output a detection result of the speech data to be detected.
The specific working process of the speech adversarial sample detection apparatus in the embodiments of the present invention is substantially the same as the steps of the speech adversarial sample detection method in the above method embodiments and is not repeated here.
In the embodiments of the present invention, speech training samples are acquired; spectrogram feature extraction is performed on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively; and the positive sample spectrograms and the negative sample spectrograms are respectively input into the preset neural network for training to obtain the speech adversarial sample detection model. This forms an automated speech adversarial sample detection tool that improves the efficiency of identifying fabricated speech while maintaining high judgment accuracy, provides a front-end safeguard for ASR security, and prevents the adverse effects of deliberately crafted, semantics-tampering adversarial examples.
FIG. 6 shows a schematic structural diagram of a computing device provided by an embodiment of the present invention; the specific embodiments of the present invention do not limit the specific implementation of the computing device. As shown in FIG. 6, the computing device may include: a processor 502, a communications interface 504, a memory 506, and a communication bus 508.
The processor 502, the communication interface 504, and the memory 506 communicate with one another via the communication bus 508. The communication interface 504 is configured to communicate with network elements of other devices such as clients or other servers. The processor 502 is configured to execute the program 510, and may specifically perform the relevant steps in the above embodiments of the training method for a speech adversarial sample detection model or of the speech adversarial sample detection method.
Specifically, the program 510 may include program code, which includes computer-executable instructions. The processor 502 may be a central processing unit (CPU), an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to implement the embodiments of the present invention. The one or more processors included in the computing device may be of the same type, such as one or more CPUs, or of different types, such as one or more CPUs and one or more ASICs. The memory 506 is configured to store the program 510; it may include high-speed RAM memory and may also include non-volatile memory, such as at least one disk memory.
Specifically, the program 510 may be invoked by the processor 502 to cause the computing device to perform the following operations:
acquiring speech training samples, the speech training samples including a plurality of normal speech samples and a plurality of adversarial speech samples, the adversarial speech samples being negative samples with tampered semantics;
performing spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively; and
inputting the positive sample spectrograms and the negative sample spectrograms respectively into a preset neural network for training to obtain a speech adversarial sample detection model; or
the program 510 may specifically be invoked by the processor 502 to cause the computing device to perform the following operations:
acquiring speech data to be detected;
converting the speech data to be detected into a spectrogram to be detected;
inputting the spectrogram to be detected into the speech adversarial sample detection model, the speech adversarial sample detection model being trained according to the above training method; and
outputting a detection result of the speech data to be detected.
In an optional manner, acquiring the speech training samples, the speech training samples including normal speech samples and adversarial speech samples, includes: obtaining original normal speech samples; and generating adversarial speech samples from the original normal speech samples through an objective function.
The objective function is: min‖δ‖₂ + l(x′+δ, t)  s.t.  db(δ) ≤ T
where δ denotes the adversarial perturbation, x′ is the original normal speech sample, t is the target sentence, l is the CTC loss, the degree of distortion is expressed in decibels db(·) and represents the relative loudness of the audio on a logarithmic scale, and T is a threshold on the energy of the adversarial perturbation.
In an optional manner, performing spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively includes: dividing each speech training sample into a plurality of small speech pieces; applying a truncating window function to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces; and performing a short-time Fourier transform on each of the windowed small speech pieces to obtain a plurality of spectrograms corresponding to each speech training sample.
In an optional manner, the truncating window function is a Hanning window function; applying the truncating window function to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces includes: applying the Hanning window function to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces.
In an optional manner, inputting the positive sample spectrograms and the negative sample spectrograms respectively into the preset neural network for training to obtain the speech adversarial sample detection model includes: inputting the positive sample spectrograms and the negative sample spectrograms respectively into the preset neural network for training, and outputting a prediction result; calculating an energy loss function according to the labels of the positive sample spectrograms, the labels of the negative sample spectrograms, and the prediction result; and adjusting the parameters of the preset neural network according to the energy loss function, re-inputting the positive sample spectrograms and the negative sample spectrograms into the preset neural network, recalculating the energy loss function, and readjusting the parameters until the energy loss function converges or reaches a preset threshold, thereby obtaining the speech adversarial sample detection model.
In an optional manner, the energy loss function is defined via the energy value:
E_θ(Y, x) = -Y·F_θ(x);
where θ is the parameter of the preset neural network, Y is the label of the speech training sample, x is the speech training sample, and β is a positive constant.
The specific working process of the computing device in the embodiments of the present invention is substantially the same as the method steps of the above method embodiments and is not repeated here.
In the embodiments of the present invention, speech training samples are acquired; spectrogram feature extraction is performed on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively; and the positive sample spectrograms and the negative sample spectrograms are respectively input into the preset neural network for training to obtain the speech adversarial sample detection model. This forms an automated speech adversarial sample detection tool that improves the efficiency of identifying fabricated speech while maintaining high judgment accuracy, provides a front-end safeguard for ASR security, and prevents the adverse effects of deliberately crafted, semantics-tampering adversarial examples.
An embodiment of the present invention provides a computer-readable storage medium, the storage medium storing at least one executable instruction which, when run on a computing device, causes the computing device to perform the training method for a speech adversarial sample detection model or the speech adversarial sample detection method in any of the above method embodiments.
The executable instruction may specifically be used to cause the computing device to perform the following operations:
acquiring speech training samples, the speech training samples including a plurality of normal speech samples and a plurality of adversarial speech samples, the adversarial speech samples being negative samples with tampered semantics;
performing spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively; and
inputting the positive sample spectrograms and the negative sample spectrograms respectively into a preset neural network for training to obtain a speech adversarial sample detection model; or
the executable instruction may also be used to cause the computing device to perform the following operations:
acquiring speech data to be detected;
converting the speech data to be detected into a spectrogram to be detected;
inputting the spectrogram to be detected into the speech adversarial sample detection model, the speech adversarial sample detection model being trained according to the above training method; and
outputting a detection result of the speech data to be detected.
In an optional manner, acquiring the speech training samples, the speech training samples including normal speech samples and adversarial speech samples, includes: obtaining original normal speech samples; and generating adversarial speech samples from the original normal speech samples through an objective function.
The objective function is: min‖δ‖₂ + l(x′+δ, t)  s.t.  db(δ) ≤ T
where δ denotes the adversarial perturbation, x′ is the original normal speech sample, t is the target sentence, l is the CTC loss, the degree of distortion is expressed in decibels db(·) and represents the relative loudness of the audio on a logarithmic scale, and T is a threshold on the energy of the adversarial perturbation.
In an optional manner, performing spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively includes: dividing each speech training sample into a plurality of small speech pieces; applying a truncating window function to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces; and performing a short-time Fourier transform on each of the windowed small speech pieces to obtain a plurality of spectrograms corresponding to each speech training sample.
In an optional manner, the truncating window function is a Hanning window function; applying the truncating window function to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces includes: applying the Hanning window function to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces.
In an optional manner, inputting the positive sample spectrograms and the negative sample spectrograms respectively into the preset neural network for training to obtain the speech adversarial sample detection model includes: inputting the positive sample spectrograms and the negative sample spectrograms respectively into the preset neural network for training, and outputting a prediction result; calculating an energy loss function according to the labels of the positive sample spectrograms, the labels of the negative sample spectrograms, and the prediction result; and adjusting the parameters of the preset neural network according to the energy loss function, re-inputting the positive sample spectrograms and the negative sample spectrograms into the preset neural network, recalculating the energy loss function, and readjusting the parameters until the energy loss function converges or reaches a preset threshold, thereby obtaining the speech adversarial sample detection model.
In an optional manner, the energy loss function is defined via the energy value:
E_θ(Y, x) = -Y·F_θ(x);
where θ is the parameter of the preset neural network, Y is the label of the speech training sample, x is the speech training sample, and β is a positive constant.
In the embodiments of the present invention, speech training samples are acquired; spectrogram feature extraction is performed on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively; and the positive sample spectrograms and the negative sample spectrograms are respectively input into the preset neural network for training to obtain the speech adversarial sample detection model. This forms an automated speech adversarial sample detection tool that improves the efficiency of identifying fabricated speech while maintaining high judgment accuracy, provides a front-end safeguard for ASR security, and prevents the adverse effects of deliberately crafted, semantics-tampering adversarial examples.
An embodiment of the present invention provides a training apparatus for a speech adversarial sample detection model, configured to perform the above training method for a speech adversarial sample detection model.
An embodiment of the present invention provides a speech adversarial sample detection apparatus, configured to perform the above speech adversarial sample detection method.
An embodiment of the present invention provides a computer program which can be invoked by a processor to cause a computing device to perform the training method for a speech adversarial sample detection model in any of the above method embodiments.
An embodiment of the present invention provides a computer program which can be invoked by a processor to cause a computing device to perform the speech adversarial sample detection method in any of the above method embodiments.
An embodiment of the present invention provides a computer program product; the computer program product includes a computer program stored on a computer-readable storage medium, the computer program includes program instructions, and when the program instructions are run on a computer, they cause the computer to perform the training method for a speech adversarial sample detection model in any of the above method embodiments.
An embodiment of the present invention provides a computer program product; the computer program product includes a computer program stored on a computer-readable storage medium, the computer program includes program instructions, and when the program instructions are run on a computer, they cause the computer to perform the speech adversarial sample detection method in any of the above method embodiments.
The algorithms or displays provided herein are not inherently related to any particular computer, virtual system, or other device. Various general-purpose systems may also be used with the teachings herein. The structure required to construct such systems is apparent from the above description. Furthermore, the embodiments of the present invention are not directed to any particular programming language. It should be understood that the content of the present invention described herein can be implemented in various programming languages, and the above description of a specific language is intended to disclose the best mode of the present invention.
The specification provided here sets forth numerous specific details. However, it is understood that embodiments of the present invention can be practiced without these specific details. In some instances, well-known methods, structures, and techniques are not shown in detail so as not to obscure the understanding of this specification.
Similarly, it should be understood that, in order to streamline the present disclosure and aid in understanding one or more of the various inventive aspects, in the above description of exemplary embodiments of the present invention, various features of the embodiments are sometimes grouped together into a single embodiment, figure, or description thereof. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim.
Those skilled in the art can understand that the modules in the devices in an embodiment can be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units, or components in the embodiments can be combined into one module, unit, or component, and can likewise be divided into a plurality of sub-modules, sub-units, or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by an alternative feature serving the same, equivalent, or similar purpose.
It should be noted that the above embodiments illustrate rather than limit the present invention, and those skilled in the art can design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The present invention can be implemented by means of hardware including several different elements and by means of a suitably programmed computer. In a unit claim enumerating several apparatuses, several of these apparatuses may be embodied by one and the same item of hardware. The use of the words first, second, third, etc. does not indicate any order; these words may be interpreted as names. Unless otherwise specified, the steps in the above embodiments should not be understood as limiting the order of execution.
Claims (11)
- A training method for a speech adversarial sample detection model, characterized in that the method includes: acquiring speech training samples, the speech training samples including a plurality of normal speech samples and a plurality of adversarial speech samples, the adversarial speech samples being negative samples with tampered semantics; performing spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively; and inputting the positive sample spectrograms and the negative sample spectrograms respectively into a preset neural network for training to obtain a speech adversarial sample detection model.
- The method according to claim 1, characterized in that acquiring the speech training samples, the speech training samples including normal speech samples and adversarial speech samples, includes: obtaining original normal speech samples; and generating adversarial speech samples from the original normal speech samples through an objective function; the objective function being: min‖δ‖₂ + l(x′+δ, t) s.t. db(δ) ≤ T, where δ denotes the adversarial perturbation, x′ is the original normal speech sample, t is the target sentence, l is the CTC loss, the degree of distortion is expressed in decibels db(·) and represents the relative loudness of the audio on a logarithmic scale, and T is a threshold on the energy of the adversarial perturbation.
- The method according to claim 1, characterized in that performing spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively includes: dividing each speech training sample into a plurality of small speech pieces; applying a truncating window function to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces; and performing a short-time Fourier transform on each of the windowed small speech pieces to obtain a plurality of spectrograms corresponding to each speech training sample.
- The method according to claim 3, characterized in that the truncating window function is a Hanning window function; applying the truncating window function to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces includes: applying the Hanning window function to the plurality of small speech pieces to obtain a plurality of windowed small speech pieces.
- The method according to any one of claims 1-4, characterized in that inputting the positive sample spectrograms and the negative sample spectrograms respectively into the preset neural network for training to obtain the speech adversarial sample detection model includes: inputting the positive sample spectrograms and the negative sample spectrograms respectively into the preset neural network for training, and outputting a prediction result; calculating an energy loss function according to the labels of the positive sample spectrograms, the labels of the negative sample spectrograms, and the prediction result; and adjusting the parameters of the preset neural network according to the energy loss function, re-inputting the positive sample spectrograms and the negative sample spectrograms into the preset neural network, recalculating the energy loss function, and readjusting the parameters until the energy loss function converges or reaches a preset threshold, thereby obtaining the speech adversarial sample detection model.
- A speech adversarial sample detection method, characterized in that the method includes: acquiring speech data to be detected; converting the speech data to be detected into a spectrogram to be detected; inputting the spectrogram to be detected into a speech adversarial sample detection model, the speech adversarial sample detection model being trained according to the training method of any one of claims 1-6; and outputting a detection result of the speech data to be detected.
- A training apparatus for a speech adversarial sample detection model, characterized in that the apparatus includes: a first acquisition module, configured to acquire speech training samples, the speech training samples including normal speech samples and adversarial speech samples, the adversarial speech samples being negative samples with tampered semantics; an extraction module, configured to perform spectrogram feature extraction on the speech training samples to obtain positive sample spectrograms and negative sample spectrograms respectively; and a training module, configured to input the positive sample spectrograms and the negative sample spectrograms respectively into the preset neural network for training to obtain the speech adversarial sample detection model.
- A speech adversarial sample detection apparatus, characterized in that the apparatus includes: a second acquisition module, configured to acquire speech data to be detected; a conversion module, configured to convert the speech data to be detected into a spectrogram to be detected; a detection module, configured to input the spectrogram to be detected into a speech adversarial sample detection model, where the speech adversarial sample detection model is trained according to the training method for a speech adversarial sample detection model of any one of claims 1-6 or by the training apparatus for a speech adversarial sample detection model of claim 8; and an output module, configured to output a detection result of the speech data to be detected.
- A computing device, characterized by including: a processor, a memory, a communication interface, and a communication bus, where the processor, the memory, and the communication interface communicate with one another via the communication bus; the memory is configured to store at least one executable instruction, and the executable instruction causes the processor to perform the operations of the training method for a speech adversarial sample detection model according to any one of claims 1-6 or of the speech adversarial sample detection method according to claim 7.
- A computer-readable storage medium, characterized in that the storage medium stores at least one executable instruction which, when run on a computing device, causes the computing device to perform the operations of the training method for a speech adversarial sample detection model according to any one of claims 1-6 or of the speech adversarial sample detection method according to claim 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202180001899.7A CN113646833A (zh) | 2021-07-14 | 2021-07-14 | 语音对抗样本检测方法、装置、设备及计算机可读存储介质 |
PCT/CN2021/106236 WO2023283823A1 (zh) | 2021-07-14 | 2021-07-14 | 语音对抗样本检测方法、装置、设备及计算机可读存储介质 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/106236 WO2023283823A1 (zh) | 2021-07-14 | 2021-07-14 | 语音对抗样本检测方法、装置、设备及计算机可读存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023283823A1 true WO2023283823A1 (zh) | 2023-01-19 |
Family
ID=78427364
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/106236 WO2023283823A1 (zh) | 2021-07-14 | 2021-07-14 | 语音对抗样本检测方法、装置、设备及计算机可读存储介质 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN113646833A (zh) |
WO (1) | WO2023283823A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117292717A (zh) * | 2023-11-27 | 2023-12-26 | 广东美的制冷设备有限公司 | 异音识别方法、装置、电子设备和存储介质 |
CN118155654A (zh) * | 2024-05-10 | 2024-06-07 | 腾讯科技(深圳)有限公司 | 模型训练方法、音频成分缺失识别方法、装置及电子设备 |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11856024B2 (en) * | 2021-06-18 | 2023-12-26 | International Business Machines Corporation | Prohibiting voice attacks |
CN114049884B (zh) * | 2022-01-11 | 2022-05-13 | 广州小鹏汽车科技有限公司 | 语音交互方法、车辆、计算机可读存储介质 |
CN115035890B (zh) * | 2022-06-23 | 2023-12-05 | 北京百度网讯科技有限公司 | 语音识别模型的训练方法、装置、电子设备及存储介质 |
CN116758936B (zh) * | 2023-08-18 | 2023-11-07 | 腾讯科技(深圳)有限公司 | 音频指纹特征提取模型的处理方法、装置和计算机设备 |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20180254046A1 (en) * | 2017-03-03 | 2018-09-06 | Pindrop Security, Inc. | Method and apparatus for detecting spoofing conditions |
CN110444208A (zh) * | 2019-08-12 | 2019-11-12 | 浙江工业大学 | 一种基于梯度估计和ctc算法的语音识别攻击防御方法及装置 |
CN110718232A (zh) * | 2019-09-23 | 2020-01-21 | 东南大学 | 一种基于二维语谱图和条件生成对抗网络的语音增强方法 |
CN110797031A (zh) * | 2019-09-19 | 2020-02-14 | 厦门快商通科技股份有限公司 | 语音变音检测方法、系统、移动终端及存储介质 |
CN111048071A (zh) * | 2019-11-11 | 2020-04-21 | 北京海益同展信息科技有限公司 | 语音数据处理方法、装置、计算机设备和存储介质 |
CN111210807A (zh) * | 2020-02-21 | 2020-05-29 | 厦门快商通科技股份有限公司 | 语音识别模型训练方法、系统、移动终端及存储介质 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108198561A (zh) * | 2017-12-13 | 2018-06-22 | 宁波大学 | 一种基于卷积神经网络的翻录语音检测方法 |
CN108597496B (zh) * | 2018-05-07 | 2020-08-28 | 广州势必可赢网络科技有限公司 | 一种基于生成式对抗网络的语音生成方法及装置 |
US11222651B2 (en) * | 2019-06-14 | 2022-01-11 | Robert Bosch Gmbh | Automatic speech recognition system addressing perceptual-based adversarial audio attacks |
CN111710346B (zh) * | 2020-06-18 | 2021-07-27 | 腾讯科技(深圳)有限公司 | 音频处理方法、装置、计算机设备以及存储介质 |
CN111933180B (zh) * | 2020-06-28 | 2023-04-07 | 厦门快商通科技股份有限公司 | 音频拼接检测方法、系统、移动终端及存储介质 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117292717A (zh) * | 2023-11-27 | 2023-12-26 | 广东美的制冷设备有限公司 | 异音识别方法、装置、电子设备和存储介质 |
CN117292717B (zh) * | 2023-11-27 | 2024-03-22 | 广东美的制冷设备有限公司 | 异音识别方法、装置、电子设备和存储介质 |
CN118155654A (zh) * | 2024-05-10 | 2024-06-07 | 腾讯科技(深圳)有限公司 | 模型训练方法、音频成分缺失识别方法、装置及电子设备 |
Also Published As
Publication number | Publication date |
---|---|
CN113646833A (zh) | 2021-11-12 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21949615; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 21949615; Country of ref document: EP; Kind code of ref document: A1 |