WO2020029882A1 - Azimuth estimation method, device and storage medium - Google Patents
- Publication number
- WO2020029882A1 (PCT/CN2019/099049)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- wake word
- azimuth
- sampling signal
- terminal device
- Prior art date
Classifications
- G01S3/8006—Multi-channel systems specially adapted for direction-finding, i.e. having a single aerial system capable of giving simultaneous indications of the directions of different signals
- G10L15/08—Speech classification or search
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L25/03—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters
- G10L25/21—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being power information
- G10L25/60—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for measuring the quality of voice signals
- G10L2015/088—Word spotting
- G10L25/18—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 characterised by the type of extracted parameters the extracted parameters being spectral information of each sub-band
Definitions
- the embodiments of the present application relate to the technical field of speech processing, and in particular, to a method, a device, and a computer-readable storage medium for azimuth estimation.
- far-field voice interaction usually refers to a distance greater than 1 meter.
- Human-machine voice interaction is considered to be the most important entry point for user traffic in the future. Therefore, both Internet platforms and content service providers attach great importance to exploration and innovation in speech recognition interfaces.
- the intelligent devices for voice interaction in the field of consumer electronics are mainly smart speakers, smart TVs or TV boxes with voice control functions.
- the main usage scenarios of these products are users' homes or living rooms. In this type of use scenario, the reverberation of the room and the noise in the environment will pose a huge challenge to speech recognition, which will seriously affect the user experience.
- the above-mentioned speech interaction devices are often equipped with multi-microphone arrays and use beamforming algorithms to improve the quality of speech signals.
- the beamforming algorithm needs to give the azimuth of the target speech, and it is very sensitive to the accuracy of the azimuth. Therefore, improving the accuracy of the target speech azimuth estimation has become a bottleneck in improving the performance of the far-field speech recognition system.
- the embodiment of the present application provides a method for azimuth estimation, which is used to improve the accuracy of the azimuth estimation during the voice interaction process.
- the embodiments of the present application also provide corresponding devices and computer-readable storage media.
- a first aspect of the embodiments of the present application provides a method for azimuth estimation, including:
- the terminal device acquiring a multi-channel sampling signal and buffering the multi-channel sampling signal;
- the terminal device performing wake-up word detection on each sampling signal in the multi-channel sampling signal, and determining a wake-up word detection score of each sampling signal;
- if the terminal device determines that the wake-up word exists according to the wake-up word detection score of each sampled signal, performing spatial spectrum estimation on the buffered multi-channel sampled signal to obtain a spatial spectrum estimation result, the wake-up word being contained in the target speech;
- the terminal device determining an azimuth of the target speech according to the spatial spectrum estimation result and the highest wake-up word detection score.
- a second aspect of the embodiments of the present application provides a terminal device, including:
- an acquisition unit, configured to acquire a multi-channel sampling signal;
- a buffering unit, configured to buffer the multi-channel sampling signal acquired by the acquisition unit;
- a detection unit, configured to perform wake-up word detection on each sampling signal in the multi-channel sampling signal buffered by the buffering unit, and determine a wake-up word detection score of each sampling signal;
- a spectrum estimation unit, configured to perform spatial spectrum estimation on the buffered multi-channel sampled signal to obtain a spatial spectrum estimation result if it is determined that the wake-up word exists according to the wake-up word detection scores of the sampling signals determined by the detection unit, the wake-up word being included in the target speech;
- a determining unit, configured to determine an azimuth of the target speech according to the spatial spectrum estimation result of the spectrum estimation unit and the highest wake-up word detection score detected by the detection unit.
- a third aspect of the embodiments of the present application provides a terminal device.
- the terminal device includes: an input/output (I/O) interface, a processor, and a memory, and the memory stores program instructions;
- the processor is configured to execute the program instructions stored in the memory to perform the method according to the first aspect.
- a fourth aspect of the embodiments of the present application provides a computer-readable storage medium, which includes instructions that, when the instructions run on a computer device, cause the computer device to execute the method according to the first aspect.
- Yet another aspect of the embodiments of the present application provides a computer program product containing instructions, which when run on a computer, causes the computer to execute the method described in the first aspect above.
- in the embodiments of the present application, the azimuth of the target speech is detected by using the spatial spectrum estimation result of the multi-channel sampling signal together with the highest wake-up word detection score among the sampled signals, thereby avoiding the influence of noise on the detection of the azimuth of the target speech and improving the accuracy of azimuth estimation during voice interaction.
- FIG. 1 is a schematic diagram of a scenario example of human-machine voice interaction in an embodiment of the present application
- FIG. 2 is a schematic diagram of another scenario example of human-machine voice interaction in the embodiment of the present application.
- FIG. 3 is a schematic diagram of an azimuth estimation method according to an embodiment of the present application.
- FIG. 4 is a schematic diagram of another embodiment of an azimuth estimation method according to an embodiment of the present application.
- FIG. 5 is a schematic diagram of another embodiment of a method for estimating an azimuth angle in an embodiment of the present application.
- FIG. 6 is a schematic diagram of an embodiment of a terminal device according to an embodiment of the present application.
- FIG. 7 is a schematic diagram of an embodiment of a terminal device according to an embodiment of the present application.
- FIG. 8 is a schematic diagram of a terminal device according to an embodiment of the present application.
- FIG. 9 is a schematic diagram of a terminal device according to an embodiment of the present application.
- the embodiment of the present application provides a method for azimuth estimation, which is used to improve the accuracy of the azimuth estimation during the voice interaction process.
- the embodiments of the present application also provide corresponding devices and computer-readable storage media. Each of them will be described in detail below.
- the terminal device in the embodiment of the present application is a voice interaction device, and may be a device such as a stereo, a television, a TV box, or a robot with a voice interaction function.
- a wake-up word is generally set in a terminal device having a voice interaction function.
- the wake-up word is usually a preset word or sentence.
- after the terminal device is woken up by the wake-up word, the voice signal subsequently sent by the user is sent as a command to the cloud device to obtain voice interaction services.
- when the terminal device samples the sound signal, it collects sound signals in various directions, which usually include noise signals, and the noise signals affect human-machine voice interaction. Therefore, the terminal device usually first determines the azimuth of the user's speech, then enhances the signals in that azimuth direction and suppresses the signals in other directions, thereby ensuring smooth human-machine voice interaction. In the process of human-computer interaction, it is therefore particularly important to estimate the azimuth of the user's speech.
- FIG. 1 is a schematic diagram of an example of a human-machine voice interaction scenario in an embodiment of the present application.
- the wake-up word is “Hello TV”, and the wake-up word reaches the smart TV 10 through air transmission.
- the smart TV is provided with a multi-array sound receiver, and the receiver may be a microphone.
- each array element can be understood as a channel, and each channel receives a sampling signal. The smart TV 10 buffers the multi-channel sampling signal and performs wake-up word detection on each sampling signal in the multi-channel sampling signal to determine the wake-up word detection score of each sampling signal.
- if it is determined that the wake-up word exists based on the wake-up word detection score of each sampled signal, spatial spectrum estimation is performed on the buffered multi-channel sampled signal to obtain a spatial spectrum estimation result, the wake-up word being included in the target speech; and the azimuth of the target speech is determined according to the spatial spectrum estimation result and the highest wake-up word detection score.
- the target voice is the voice of the user who issued the wake-up word, and the target voice includes the wake-up word.
- in the embodiments of the present application, the azimuth of the target speech is detected by using the spatial spectrum estimation result of the multi-channel sampling signal together with the highest wake-up word detection score among the sampled signals, thereby avoiding the influence of noise on the detection of the azimuth of the target speech and improving the accuracy of azimuth estimation during voice interaction.
- the smart TV 10 can perform voice interaction with the cloud device 20. If, during the voice interaction, the user says "Langya Bang" to the smart TV 10, the smart TV 10 performs voice recognition on the collected voice signal, or transmits it to the cloud device 20 for voice recognition. After the cloud device 20 recognizes that the voice content is "Langya Bang", the content related to the TV series "Langya Bang" is returned to the smart TV 10.
- an embodiment of the azimuth estimation method provided in the embodiment of the present application includes:
- a terminal device acquires a multi-channel sampling signal and buffers the multi-channel sampling signal.
- the terminal device performs wake word detection on each of the sampling signals in the multi-channel sampling signal, and determines a wake word detection score of each sampling signal.
- if the terminal device determines that the wake-up word exists according to the wake-up word detection score of each sampled signal, it performs spatial spectrum estimation on the buffered multi-channel sampled signal to obtain a spatial spectrum estimation result, the wake-up word being contained in the target speech.
- the terminal device determines an azimuth angle of the target speech according to the spatial spectrum estimation result and the highest wake-up word detection score.
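The four steps above can be sketched as a minimal Python skeleton. The callables `detect`, `spectrum` and `fuse`, the function name, and the score threshold of 6 are illustrative assumptions, not part of the patent disclosure:

```python
def azimuth_pipeline(channels, detect, spectrum, fuse, threshold=6.0):
    """Skeleton of the four method steps: score each buffered channel for
    the wake-up word; if any score clears the threshold, estimate the
    spatial spectrum over the buffered signals and fuse it with the
    per-channel scores to pick the azimuth of the target speech."""
    scores = [detect(ch) for ch in channels]   # wake-up word score per channel
    if max(scores) <= threshold:
        return None                            # no wake-up word: keep buffering
    spec = spectrum(channels)                  # power per candidate azimuth
    return fuse(spec, scores)                  # azimuth of the target speech
```

With stub callables, the skeleton returns an azimuth only once a channel's score exceeds the threshold.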
- the embodiment of the present application detects the azimuth of the target voice by using the spatial spectrum estimation result of the multi-sampling signal to assist the highest awakening word score in the multi-sampling signal, thereby avoiding the influence of noise on the detection of the azimuth of the target voice and improving speech interaction. Accuracy of azimuth estimation during the process.
- the terminal device receives the array signal through the microphones, and then divides the received array signal into N beams according to different directions, with each beam signal passing through one channel.
- single-channel noise reduction can then be performed, that is, the noise on each channel is reduced, after which wake-up word detection is performed on the sampled signal of each channel.
- the terminal device performing wake word detection on each of the multi-channel sampling signals and determining a wake word detection score of each sampling signal may include:
- the wake-up word detection score of each sampled signal is determined according to the confidence of the wake-up word of each sampled signal.
- the detection of the wake-up word mainly detects the similarity between the content of the sampled signal on a channel and the pre-configured wake-up word. For example, if the pre-configured wake-up word is "Hello TV" and the content detected in one sampled signal is "television", the sampled signal of that channel is similar to the pre-configured wake-up word to a certain extent, and the wake-up word detection score of that sampled signal may be 5 points. If the content detected in another sampled signal is "TV you", that sampled signal is similar to the pre-configured wake-up word to a large extent, and its wake-up word detection score may be 8 points.
- the specific wake-up word detection score is calculated by an algorithm; the above is only an example and should not be understood as limiting the wake-up word detection score.
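As a toy illustration of such a score (a real detector scores acoustic similarity with a trained model and its confidence; the token-overlap measure, function name, and 0-10 scale here are assumptions for illustration only):

```python
def wake_word_score(detected_text, wake_word="hello tv", max_score=10.0):
    """Hypothetical wake-up word score: the fraction of wake-up word
    tokens found in the detected text, scaled to the 0-10 range used in
    the example above. Purely illustrative of 'higher similarity means a
    higher score'; not the patent's actual scoring algorithm."""
    tokens = wake_word.split()
    hits = sum(1 for t in tokens if t in detected_text.lower())
    return max_score * hits / len(tokens)
```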
- the comprehensive decision scheme may be: the terminal device determines, according to the wake-up word detection score of each sampling signal, whether the wake-up word exists; when the wake-up word detection score of any sampling signal is greater than the score threshold, it is determined that the wake-up word exists.
- for example, if the score threshold is 6 points and the wake-up word detection scores of 4 channels are 3, 5, 7, and 8 points respectively, the wake-up word detection scores of two channels are greater than the score threshold of 6 points, so it is determined that the wake-up word exists.
- this is only one judgment scheme for determining the presence of the wake-up word; other feasible judgment schemes may also be used, for example, determining whether the wake-up word exists through the cumulative score of each channel.
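The per-channel threshold test and the cumulative-score alternative mentioned above can be sketched as follows (the function name and the optional cumulative threshold parameter are assumptions for illustration):

```python
def wake_word_present(scores, threshold=6.0, cumulative_threshold=None):
    """Decide whether the wake-up word exists: either some single
    channel's wake-up word detection score exceeds the score threshold,
    or, as an alternative scheme, the cumulative score over all channels
    exceeds a second (hypothetical) threshold."""
    if any(s > threshold for s in scores):
        return True                       # at least one channel clears it
    if cumulative_threshold is not None:
        return sum(scores) > cumulative_threshold
    return False
```

With the example scores 3, 5, 7, 8 and a threshold of 6, two channels clear the threshold and the wake-up word is judged present.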
- the terminal device may also determine the time period in which the wake-up word appears, from its beginning to its end.
- the terminal device performing spatial spectrum estimation on the buffered multi-channel sampled signal to obtain a spatial spectrum estimation result may then include: performing spatial spectrum estimation on the sampled signal within the determined time period.
- the time point at which the wake-up word ends is easily determined: the point at which the wake-up word detection score is highest may be taken as the time point at which the wake-up word ends.
- the time point at which the wake-up word starts to appear may be the time at which the wake-up word detection score starts to change: if there is no wake-up word, the wake-up word detection score is basically zero; when the user starts to say the wake-up word, the score changes, for example rising to 1 point and then to 2 points, and the point at which the change first appears can be determined as the time point at which the wake-up word begins to appear.
- determining the time period of the wake-up word according to the wake-up word detection score is only one way; it can also be determined from the energy fluctuation record of the sampled signal. The energy of the sampled signal before and after the user speaks the wake-up word is relatively small, so the time period from when the energy begins to increase until it decreases back to a plateau can be determined as the time period in which the wake-up word lies.
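The score-based scheme above can be sketched directly: the end point t1 is where the per-frame score peaks, and the start point t0 is where the score first rises from zero (frame indices and the function name are illustrative):

```python
def wake_word_interval(score_trace):
    """Estimate the frame indices (t0, t1) of the wake-up word from the
    per-frame detection score: t1 is where the score is highest (the word
    has just ended), and t0 is the first frame where the score starts to
    change from its near-zero resting value."""
    t1 = max(range(len(score_trace)), key=lambda i: score_trace[i])
    t0 = 0
    for i in range(t1 + 1):
        if score_trace[i] > 0:            # first departure from zero
            t0 = i
            break
    return t0, t1
```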
- the buffer unit buffers the sampling signals, but if the user does not say the wake-up word, it does not make sense for the buffer unit to buffer many sampling signals. Therefore, in order to save buffer space, in the embodiment of the present application the buffer is cleared according to the length of the buffered sampling signal.
- the scheme for clearing the buffer may be: for the buffered multi-channel sampling signal, keep the sampling signals of the latest (M + N) time length and delete the sampling signals outside the (M + N) time length, where M is the duration occupied by the wake-up word and N is a preset duration.
- in this way, the buffer unit always buffers the newly collected sampling signals for longer than the time taken by the wake-up word, which ensures that the wake-up word is buffered and effectively saves buffer space.
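The (M + N) retention rule maps naturally onto a fixed-capacity ring buffer; the sketch below (class name and sample-count units are assumptions) keeps only the newest M + N samples per channel:

```python
from collections import deque

class SampleBuffer:
    """Per-channel ring buffer that retains only the most recent
    m_samples + n_samples frames, where m_samples covers the wake-up word
    duration M and n_samples is the preset margin N; older frames are
    discarded automatically, so the wake-up word is always retained."""
    def __init__(self, m_samples, n_samples):
        self.buf = deque(maxlen=m_samples + n_samples)

    def push(self, frame):
        self.buf.append(frame)            # oldest frame drops off if full

    def snapshot(self):
        return list(self.buf)             # buffered signal for estimation
```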
- after it is determined that the wake-up word exists, the azimuth estimation unit is activated. If it is determined that the time when the wake-up word starts to appear is t0 and the time when the wake-up word ends is t1, the azimuth estimation unit extracts the target sampled signal in the time period from t0 to t1 and performs spatial spectrum estimation on the target sampled signal.
- the terminal device performing spatial spectrum estimation on the target sampled signal to obtain a spatial spectrum estimation result may include: calculating signal power intensities at multiple candidate azimuth angles according to the target sampled signal.
- the azimuth estimation unit calculates the spatial spectrum using the target sampled signal in the time period from t0 to t1.
- the spatial spectrum is the signal power intensity corresponding to the multiple candidate azimuth angles, that is, the signal power intensity in each candidate direction.
- the choice of the candidate direction angles is determined by the usage scenario and the estimation accuracy requirement. For example, when a circular microphone array is used and the azimuth estimation accuracy is required to be 10 degrees, the candidate directions can be selected as 0°, 10°, 20°, ..., 350°; when a linear microphone array is used and the azimuth estimation accuracy is required to be 30 degrees, the candidate directions can be selected as 0°, 30°, 60°, ..., 180°.
- the multiple candidate azimuth angles may be labeled θ1, θ2, ..., θK, where K is the number of candidate azimuth angles.
- the spatial spectrum estimation algorithm estimates the signal power intensity in each candidate direction, written as P1, P2, ..., PK.
- the spatial spectrum estimation algorithm can use a super-cardioid fixed beamforming algorithm or other spatial spectrum estimation algorithms, which are not discussed in detail here.
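The patent names a super-cardioid fixed beamformer as one option; as a generic stand-in for "signal power per candidate angle", the sketch below computes a steered-response power for a linear array by delay-and-sum toward each candidate direction. The far-field plane-wave geometry and integer-sample delay rounding are simplifying assumptions:

```python
import numpy as np

def spatial_spectrum(frames, mic_x, angles_deg, fs, c=343.0):
    """Steered-response-power sketch for a linear array.

    frames: (num_mics, num_samples) target sampled signal, one row per mic.
    mic_x:  mic positions (meters) along the array axis.
    Returns the delay-and-sum output power for each candidate angle; the
    spatial spectrum P1..PK peaks near the true source direction."""
    num_mics, n = frames.shape
    powers = []
    for theta in np.deg2rad(angles_deg):
        # plane-wave propagation delay of each mic, in samples
        delays = np.asarray(mic_x) * np.cos(theta) / c * fs
        out = np.zeros(n)
        for ch in range(num_mics):
            # advance each channel to compensate its propagation delay
            out += np.roll(frames[ch], -int(round(delays[ch])))
        powers.append(float(np.mean((out / num_mics) ** 2)))
    return powers
```

When the steering angle matches the arrival direction, the channels add coherently and the output power is maximal.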
- the terminal device determining the azimuth of the target speech according to the spatial spectrum estimation result and the highest wake-up word detection score may include: determining the azimuth of the target main beam corresponding to the highest wake-up word detection score and the local maximum points of the spatial spectrum, and determining an azimuth of the target speech according to the azimuth of the target main beam and the local maximum points.
- determining the azimuth of the target speech based on the azimuth of the target main beam and the local maximum points may include: determining the candidate azimuth corresponding to the local maximum point closest to the azimuth of the target main beam as the azimuth of the target speech; or determining the average value of the candidate azimuth angles corresponding to at least two local maximum points as the azimuth of the target speech.
- the process of azimuth estimation may include two processes: spatial spectrum estimation, and joint judgment based on the spatial spectrum and the wake-up word detection scores.
- the spatial spectrum estimation result and the wake-up word detection scores (recorded as S1, S2, ..., SN) can be integrated to remove the interference of strong noise on the spatial spectrum.
- a feasible solution may be to determine the highest wake-up word detection score S* and the main beam direction θ* of its pre-fixed beamforming algorithm.
- a higher wake-up word score represents better target speech quality and less noise residue, so the direction of the target speech is near θ*.
- among the local maximum points of the spatial spectrum, the one closest to θ* is found, and its corresponding candidate azimuth angle is taken as the estimate of the azimuth angle of the target speech.
- the remaining local maximum points may be caused by noise interference, and their corresponding candidate azimuth angles represent the directions of point-source interference noise in the environment.
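The joint judgment can be sketched as follows: find the local maxima of the spatial spectrum, and select the one nearest the best beam direction θ*, or average the candidate angles when at least two maxima fall near θ*. The angular window used to decide "near" is an assumption for illustration, not specified in the text:

```python
def refine_azimuth(candidate_angles, powers, theta_star, window=30.0):
    """Pick the azimuth of the target speech from the spatial spectrum.

    candidate_angles[k] and powers[k] describe the spatial spectrum;
    theta_star is the main beam direction of the channel with the highest
    wake-up word detection score. Maxima far from theta_star are treated
    as point-source noise directions and ignored."""
    maxima = [candidate_angles[k] for k in range(1, len(powers) - 1)
              if powers[k] > powers[k - 1] and powers[k] > powers[k + 1]]
    near = [a for a in maxima if abs(a - theta_star) <= window]
    if len(near) >= 2:
        return sum(near) / len(near)      # average of nearby local maxima
    if maxima:
        # single estimate: local maximum closest to the best beam direction
        return min(maxima, key=lambda a: abs(a - theta_star))
    return theta_star                     # degenerate spectrum: fall back
```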
- the wake-up word in human-computer interaction has a natural minimum length, denoted Tmin; there will be no second wake-up within the Tmin time after one wake-up. Therefore, the wake-up word detection calculations can be stopped during this time, saving computation for azimuth estimation.
- the method may further include:
- the terminal device stops performing wake-up word detection on each of the multi-channel sampled signals for a length of time after the wake-up word is determined to exist.
- before the wake-up word is detected, the multi-channel wake-up word detection module runs continuously; the azimuth estimation module does not perform any operation, and the voice signal processing module does not perform any processing but only performs internal state tracking.
- if the wake-up word is detected at time Ts, the calculation of all the multi-channel wake-up word detection modules, which may include the pre-fixed beamforming algorithm, the noise reduction algorithm, and the single-channel wake-up word detection module, is stopped during the time period from Ts to Ts + Tmin.
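The Ts to Ts + Tmin suppression window can be sketched as a small gate object (class and method names are illustrative) that callers consult before running any wake-up word detection work:

```python
class WakeGate:
    """After a wake-up at time ts, skip all wake-up word detection work
    until ts + t_min, since the wake-up word's natural minimum length
    means a second wake-up cannot occur within that window."""
    def __init__(self, t_min):
        self.t_min = t_min
        self.suppressed_until = float("-inf")

    def should_detect(self, now):
        # detection resumes once the suppression window has elapsed
        return now >= self.suppressed_until

    def on_wake(self, ts):
        # wake-up word detected at ts: stop detection for t_min seconds
        self.suppressed_until = ts + self.t_min
```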
- the spatial spectrum estimation algorithm is then used to perform spatial spectrum estimation, obtaining better spatial spectrum estimation performance and resolution; combined with the wake-up word detection score at Ts, the optimal azimuth of the target speech is finally obtained.
- in this way, the peak computation load of the system can be reduced, which reduces system delay, possible dropped frames, and signal discontinuities.
- azimuth estimation is not performed before the wake-up word is detected.
- when the wake-up word is detected, the target sampling signal in the time period from t0 to t1 is extracted from the buffer unit and the possible azimuth angle of the speech signal is estimated. The estimation result and the scores of the multi-channel wake-up word detection module are integrated to obtain the final azimuth angle estimation result of the target speech, and the azimuth angle of the target speech is output to the voice signal processing module, so that the voice signal processing module can carry out the voice interaction process.
- the signal in the azimuth direction of the target voice can be enhanced, and signals in other directions can be suppressed, thereby ensuring smooth voice interaction.
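As a minimal stand-in for the enhancement step (the patent's processing module may use more sophisticated beamforming; the geometry and function name here are assumptions), a delay-and-sum beamformer steered toward the estimated azimuth reinforces the target direction and attenuates others:

```python
import numpy as np

def enhance_toward(frames, mic_x, theta_deg, fs, c=343.0):
    """Delay-and-sum the channels toward the estimated azimuth: each
    channel is advanced by its plane-wave propagation delay so the target
    direction adds coherently, while other directions partially cancel."""
    delays = np.asarray(mic_x) * np.cos(np.deg2rad(theta_deg)) / c * fs
    out = np.zeros(frames.shape[1])
    for ch in range(frames.shape[0]):
        out += np.roll(frames[ch], -int(round(delays[ch])))
    return out / frames.shape[0]          # enhanced single-channel signal
```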
- before the wake-up word is detected, the voice signal processing module only performs internal state tracking, such as echo cancellation, noise intensity estimation, and voice detection, and does not perform any processing on the multi-channel sampled signals.
- after the wake-up word is detected, the azimuth angle of the speech signal newly estimated by the azimuth estimation module is used as the target direction of a speech processing algorithm such as beamforming, so as to perform target speech signal enhancement and output the enhanced signal to the speech recognition module.
- the speech recognition module does not perform any recognition operation before the wake-up word is detected. After receiving the activation signal provided by the wake-up word score comprehensive judgment module, it recognizes the enhanced target speech signal provided by the voice signal processing module and provides the recognition result once recognition is completed.
- the terminal device 40 provided in this embodiment of the present application includes one or more processors and one or more memories storing program units, where the program units are executed by the processors, and the program units include:
- the obtaining unit 401 is configured to obtain a multi-channel sampling signal
- the buffering unit 402 is configured to buffer the multi-channel sampling signal acquired by the acquiring unit 401;
- the detection unit 403 is configured to perform wake-up word detection on each sampling signal in the multi-channel sampling signal buffered by the buffer unit 402, and determine a wake-up word detection score of each sampling signal;
- the spectrum estimation unit 404 is configured to perform spatial spectrum estimation on the buffered multi-channel sampled signal to obtain a spatial spectrum estimation result if it is determined that the wake-up word exists according to the wake-up word detection score of each sampled signal determined by the detection unit 403, the wake-up word being included in the target speech;
- the determining unit 405 is configured to determine the azimuth of the target speech according to the spatial spectrum estimation result of the spectrum estimation unit 404 and the highest wake-up word detection score detected by the detection unit.
- in the embodiments of the present application, the azimuth of the target speech is detected by using the spatial spectrum estimation result of the multi-channel sampling signal together with the highest wake-up word detection score among the sampled signals, thereby avoiding the influence of noise on the detection of the azimuth of the target speech and improving the accuracy of azimuth estimation during voice interaction.
- the determining unit 405 is further configured to determine a time period in which the wake-up word appears from the beginning to the end;
- the spectrum estimation unit 404 is configured to calculate signal power intensities at multiple candidate azimuth angles according to the target sampled signal.
- the spectrum estimation unit 404 is configured to determine an azimuth angle of the target speech according to an azimuth angle of the target main beam and the local maximum points.
- the spectrum estimation unit 404 is configured to determine the candidate azimuth corresponding to the local maximum point closest to the azimuth of the target main beam as the azimuth of the target speech.
- the spectrum estimation unit 404 is configured to determine the average value of the candidate azimuth angles corresponding to at least two local maximum points as the azimuth of the target speech.
- the determining unit 405 is configured to:
- a time period from when the wake-up word begins to appear until it ends is determined.
- the terminal device 40 provided in the embodiment of the present application further includes a control unit 406,
- the control unit 406 is configured to stop performing wake-up word detection on each sampling signal of the multi-channel sampling signals within the period from determining that the wake-up word exists until the wake-up word can appear again.
- the detection unit 403 is configured to:
- the wake-up word detection score of each sampled signal is determined according to the confidence of the wake-up word of each sampled signal.
- the determining unit 405 is further configured to: when the wake-up word detection score of any one of the sampling signals is greater than a score threshold, determine that the wake-up word exists.
- the terminal device 40 provided in the embodiment of the present application further includes a cleanup unit 407,
- the cleanup unit 407 is configured to: for the buffered multi-channel sampling signals, retain the most recent sampling signals of duration (M + N) and delete the sampling signals outside the (M + N) duration, where M is the duration occupied by the wake-up word and N is a preset duration.
- the terminal device 40 described in the foregoing embodiment may be understood by referring to the corresponding descriptions in FIG. 1 to FIG. 5, and details are not repeated herein.
- FIG. 9 is a schematic structural diagram of a terminal device 50 according to an embodiment of the present application.
- the terminal device 50 includes a processor 510, a memory 540, and an input / output (I / O) interface 530.
- the memory 540 may include a read-only memory and a random access memory, and provide the processor 510 with operation instructions and data.
- a part of the memory 540 may further include a non-volatile random access memory (NVRAM).
- the memory 540 stores the following elements, executable modules or data structures, or their subsets, or their extended sets:
- in the embodiment of the present application, during azimuth estimation, by invoking the operation instructions stored in the memory 540 (the operation instructions may be stored in an operating system), the following operations are performed:
- acquire multi-channel sampling signals and buffer the multi-channel sampling signals; perform wake-up word detection on each sampling signal of the multi-channel sampling signals and determine a wake-up word detection score of each sampling signal; if it is determined, according to the wake-up word detection score of each sampling signal, that the wake-up word exists, perform spatial spectrum estimation on the buffered multi-channel sampling signals to obtain a spatial spectrum estimation result, the wake-up word being included in a target speech;
- determine an azimuth of the target speech according to the spatial spectrum estimation result and the highest wake-up word detection score.
- in the embodiment of the present application, the highest wake-up word detection score among the multi-channel sampling signals is used to assist the spatial spectrum estimation result of the multi-channel sampling signals in detecting the azimuth of the target speech, which avoids the influence of noise on the detection of the azimuth of the target speech and improves the accuracy of azimuth estimation during voice interaction.
- the processor 510 controls the operation of the terminal device 50, and the processor 510 may also be referred to as a central processing unit (CPU).
- the memory 540 may include a read-only memory and a random access memory, and provide instructions and data to the processor 510. A part of the memory 540 may further include a non-volatile random access memory (NVRAM).
- various components of the terminal device 50 are coupled together through a bus system 520.
- the bus system 520 may include a power bus, a control bus, and a status signal bus in addition to a data bus. However, for the sake of clarity, various buses are marked as the bus system 520 in the figure.
- the methods disclosed in the foregoing embodiments of the present application may be applied to the processor 510, or implemented by the processor 510.
- the processor 510 may be an integrated circuit chip and has a signal processing capability. In the implementation process, each step of the above method may be completed by an integrated logic circuit of hardware in the processor 510 or an instruction in the form of software.
- the processor 510 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
- Various methods, steps, and logical block diagrams disclosed in the embodiments of the present application can be implemented or executed.
- a general-purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
- the steps of the method disclosed in combination with the embodiments of the present application may be directly implemented by a hardware decoding processor, or may be performed by using a combination of hardware and software modules in the decoding processor.
- the software module may be located in a mature storage medium such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, or an electrically erasable programmable memory, a register, and the like.
- the storage medium is located in the memory 540, and the processor 510 reads the information in the memory 540 and completes the steps of the foregoing method in combination with its hardware.
- the processor 510 is configured to:
- the processor 510 is configured to:
- the processor 510 is configured to:
- An azimuth angle of the target voice is determined according to an azimuth angle of the target main beam and the local maximum point.
- the processor 510 is configured to:
- the candidate azimuth corresponding to the local maximum point closest to the azimuth of the target main beam is determined as the azimuth of the target speech.
- the processor 510 is configured to:
- if there are at least two local maximum points closest to the azimuth of the target main beam, the average value of the candidate azimuths corresponding to the at least two local maximum points is determined as the azimuth of the target speech.
- the processor 510 is configured to:
- a time period from when the wake-up word begins to appear until it ends is determined.
- processor 510 is further configured to:
- the processor 510 is configured to:
- the wake-up word detection score of each sampled signal is determined according to the confidence of the wake-up word of each sampled signal.
- processor 510 is further configured to:
- when the wake-up word detection score of any one of the sampling signals is greater than the score threshold, it is determined that the wake-up word exists.
- processor 510 is further configured to:
- for the buffered multi-channel sampling signals, the most recent sampling signals of duration (M + N) are retained, and the sampling signals outside the (M + N) duration are deleted, where M is the duration occupied by the wake-up word and N is a preset duration.
- the description of the terminal device 50 above can be understood with reference to the descriptions in FIG. 1 to FIG. 5 and is not repeated here.
- the computer program product includes one or more computer instructions.
- the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
- the computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (such as coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (such as infrared, radio, or microwave) means.
- the computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media.
- the available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (Solid State Disk, SSD)).
- the program may be stored in a computer-readable storage medium.
- the storage medium may include: ROM, RAM, disk or optical disc, etc.
- in the embodiment of the present application, the highest wake-up word detection score among the multi-channel sampling signals is used to assist the spatial spectrum estimation result of the multi-channel sampling signals in detecting the azimuth of the target speech, which avoids the influence of noise on the detection of the azimuth of the target speech and improves the accuracy of azimuth estimation during voice interaction.
Abstract
An azimuth estimation method, including: a terminal device acquires multi-channel sampling signals and buffers the multi-channel sampling signals (301); the terminal device performs wake-up word detection on each sampling signal of the multi-channel sampling signals and determines a wake-up word detection score of each sampling signal (302); if the terminal device determines, according to the wake-up word detection score of each sampling signal, that the wake-up word exists, spatial spectrum estimation is performed on the buffered multi-channel sampling signals to obtain a spatial spectrum estimation result, the wake-up word being included in a target speech (303); the terminal device determines an azimuth of the target speech according to the spatial spectrum estimation result and the highest wake-up word detection score (304). Also provided are a terminal device and a computer-readable storage medium.
Description
This application claims priority to Chinese Patent Application No. 201810887965.5, filed with the China National Intellectual Property Administration on August 6, 2018 and entitled "Azimuth estimation method, device, and storage medium", which is incorporated herein by reference in its entirety.
The embodiments of this application relate to the field of speech processing technologies, and in particular, to an azimuth estimation method, a device, and a computer-readable storage medium.
With the popularity of smart speakers and their derivatives, voice interaction between humans and machines, especially far-field voice interaction, has gradually become an important research direction. In the field of voice interaction, far-field voice interaction usually refers to interaction at a distance greater than one meter. Voice interaction between humans and machines is considered to be the most important entry point for user traffic in the future. Therefore, Internet platforms and content service providers attach great importance to the exploration and innovation of speech recognition interfaces.
At present, voice-interactive smart devices in the consumer electronics field are mainly products such as smart speakers and smart TVs or TV boxes with voice control functions. The main usage scenarios of these products are the user's home or living room. In such scenarios, room reverberation and environmental noise pose great challenges to speech recognition and thus seriously affect the user experience.
To achieve better far-field speech recognition performance, the foregoing voice interaction devices are often equipped with multi-microphone arrays and use beamforming algorithms to improve speech signal quality. However, to achieve optimal performance, a beamforming algorithm needs to be given the azimuth of the target speech and is very sensitive to the accuracy of that azimuth. Therefore, improving the accuracy of target speech azimuth estimation has become a bottleneck in improving the performance of far-field speech recognition systems.
Summary
The embodiments of this application provide an azimuth estimation method, which is used to improve the accuracy of azimuth estimation during voice interaction. The embodiments of this application further provide a corresponding device and a computer-readable storage medium.
A first aspect of the embodiments of this application provides an azimuth estimation method, including:
a terminal device acquires multi-channel sampling signals and buffers the multi-channel sampling signals;
the terminal device performs wake-up word detection on each sampling signal of the multi-channel sampling signals and determines a wake-up word detection score of each sampling signal;
if the terminal device determines, according to the wake-up word detection score of each sampling signal, that the wake-up word exists, spatial spectrum estimation is performed on the buffered multi-channel sampling signals to obtain a spatial spectrum estimation result, the wake-up word being included in a target speech;
the terminal device determines an azimuth of the target speech according to the spatial spectrum estimation result and the highest wake-up word detection score.
A second aspect of the embodiments of this application provides a terminal device, including:
an acquisition unit, configured to acquire multi-channel sampling signals;
a buffer unit, configured to buffer the multi-channel sampling signals acquired by the acquisition unit;
a detection unit, configured to perform wake-up word detection on each sampling signal of the multi-channel sampling signals buffered by the buffer unit and determine a wake-up word detection score of each sampling signal;
a spectrum estimation unit, configured to perform spatial spectrum estimation on the buffered multi-channel sampling signals to obtain a spatial spectrum estimation result if it is determined, according to the wake-up word detection score of each sampling signal determined by the detection unit, that the wake-up word exists, the wake-up word being included in a target speech;
a determining unit, configured to determine an azimuth of the target speech according to the spatial spectrum estimation result of the spectrum estimation unit and the highest wake-up word detection score detected by the detection unit.
A third aspect of the embodiments of this application provides a terminal device, including an input/output (I/O) interface, a processor, and a memory storing program instructions;
the processor is configured to execute the program instructions stored in the memory to perform the method according to the first aspect.
A fourth aspect of the embodiments of this application provides a computer-readable storage medium including instructions that, when run on a computer device, cause the computer device to perform the method according to the first aspect.
Yet another aspect of the embodiments of this application provides a computer program product including instructions that, when run on a computer, cause the computer to perform the method according to the first aspect.
In the embodiments of this application, the highest wake-up word detection score among the multi-channel sampling signals is used to assist the spatial spectrum estimation result of the multi-channel sampling signals in detecting the azimuth of the target speech, which avoids the influence of noise on the detection of the azimuth of the target speech and improves the accuracy of azimuth estimation during voice interaction.
FIG. 1 is a schematic diagram of an example scenario of human-machine voice interaction in an embodiment of this application;
FIG. 2 is a schematic diagram of another example scenario of human-machine voice interaction in an embodiment of this application;
FIG. 3 is a schematic diagram of an embodiment of the azimuth estimation method in an embodiment of this application;
FIG. 4 is a schematic diagram of another embodiment of the azimuth estimation method in an embodiment of this application;
FIG. 5 is a schematic diagram of another embodiment of the azimuth estimation method in an embodiment of this application;
FIG. 6 is a schematic diagram of an embodiment of the terminal device in an embodiment of this application;
FIG. 7 is a schematic diagram of an embodiment of the terminal device in an embodiment of this application;
FIG. 8 is a schematic diagram of an embodiment of the terminal device in an embodiment of this application;
FIG. 9 is a schematic diagram of an embodiment of the terminal device in an embodiment of this application.
The embodiments of this application are described below with reference to the accompanying drawings. Obviously, the described embodiments are merely some rather than all of the embodiments of this application. A person of ordinary skill in the art may know that, with the development of technology and the emergence of new scenarios, the technical solutions provided in the embodiments of this application are also applicable to similar technical problems.
The embodiments of this application provide an azimuth estimation method, which is used to improve the accuracy of azimuth estimation during voice interaction. The embodiments of this application further provide a corresponding device and a computer-readable storage medium. Detailed descriptions are given below.
The terminal device in the embodiments of this application is a voice interaction device, which may be a speaker, a television, a TV box, a robot, or another device with a voice interaction function.
As a means of protecting user privacy and reducing the overall power consumption of the device, a wake-up word is generally set in a terminal device with a voice interaction function. The wake-up word is usually a preset word or sentence. Only after the user speaks the wake-up word and it is detected by the terminal device is the speech signal sent by the user treated as a command and sent to the cloud device for voice interaction services. When sampling sound signals, the terminal device collects sound signals from all directions, usually including noise signals, and noise signals affect human-machine voice interaction. Therefore, the terminal device usually first determines the azimuth of the user's speech, then enhances the signal in the direction of that azimuth and suppresses signals in other directions, so as to ensure smooth human-machine voice interaction. Therefore, in the human-machine interaction process, estimating the azimuth of the user's speech is particularly important.
FIG. 1 is a schematic diagram of an example human-machine voice interaction scenario in an embodiment of this application.
As shown in FIG. 1, when a user wants to wake up a smart TV 10 with a voice interaction function, the user may speak the wake-up word; in this scenario, the wake-up word is "电视你好" ("Hello TV"). The wake-up word travels through the air to the smart TV 10. The smart TV is provided with a multi-array sound receiver, and the receiver may be a microphone. Each array may be understood as a channel, and each channel receives one sampling signal. The smart TV 10 buffers the multi-channel sampling signals, then performs wake-up word detection on each sampling signal of the multi-channel sampling signals and determines a wake-up word detection score of each sampling signal; if it is determined, according to the wake-up word detection score of each sampling signal, that the wake-up word exists, spatial spectrum estimation is performed on the buffered multi-channel sampling signals to obtain a spatial spectrum estimation result, the wake-up word being included in a target speech; and an azimuth of the target speech is determined according to the spatial spectrum estimation result and the highest wake-up word detection score.
The target speech is the speech of the user who speaks the wake-up word, and the target speech includes the wake-up word.
In the embodiments of this application, the highest wake-up word detection score among the multi-channel sampling signals is used to assist the spatial spectrum estimation result of the multi-channel sampling signals in detecting the azimuth of the target speech, which avoids the influence of noise on the detection of the azimuth of the target speech and improves the accuracy of azimuth estimation during voice interaction.
After the azimuth of the target speech is determined, as shown in FIG. 2, the smart TV 10 can perform voice interaction with the cloud device 20. If during the voice interaction the user says "琅琊榜" ("Nirvana in Fire") to the smart TV 10, the smart TV 10 performs speech recognition on the collected speech signal or transmits it to the cloud device 20 for speech recognition. After recognizing that the speech content is "琅琊榜", the cloud device 20 returns content related to the TV series "琅琊榜" to the smart TV 10.
The above briefly describes azimuth estimation and voice interaction in the embodiments of this application with reference to example scenarios. The azimuth estimation method in the embodiments of this application is described below with reference to FIG. 3.
As shown in FIG. 3, an embodiment of the azimuth estimation method provided in the embodiments of this application includes:
301. A terminal device acquires multi-channel sampling signals and buffers the multi-channel sampling signals.
302. The terminal device performs wake-up word detection on each sampling signal of the multi-channel sampling signals and determines a wake-up word detection score of each sampling signal.
303. If the terminal device determines, according to the wake-up word detection score of each sampling signal, that the wake-up word exists, spatial spectrum estimation is performed on the buffered multi-channel sampling signals to obtain a spatial spectrum estimation result, the wake-up word being included in a target speech.
304. The terminal device determines an azimuth of the target speech according to the spatial spectrum estimation result and the highest wake-up word detection score.
In the embodiments of this application, the highest wake-up word detection score among the multi-channel sampling signals is used to assist the spatial spectrum estimation result of the multi-channel sampling signals in detecting the azimuth of the target speech, which avoids the influence of noise on the detection of the azimuth of the target speech and improves the accuracy of azimuth estimation during voice interaction.
The azimuth estimation method provided in the embodiments of this application can also be understood with reference to FIG. 4. As shown in FIG. 4, the terminal device receives array signals through microphones and divides the received array signals into N beams according to different directions, each beam passing through one channel. As shown in FIG. 4, the N beams span direction 1 to direction N. For example, if N=4, the 0-degree direction may be direction 1, the 90-degree direction may be direction 2, the 180-degree direction may be direction 3, and the 270-degree direction may be direction 4. Single-channel noise reduction may be performed on the sampling signal of each channel, that is, the noise on that channel is reduced. Wake-up word detection is then performed on the sampling signal of each channel.
Optionally, that the terminal device performs wake-up word detection on each sampling signal of the multi-channel sampling signals and determines a wake-up word detection score of each sampling signal may include:
performing wake-up word detection on each sampling signal of the multi-channel sampling signals and determining a confidence of the wake-up word for each sampling signal, the confidence being the degree of similarity between the content of each sampling signal and a preconfigured wake-up word;
determining the wake-up word detection score of each sampling signal according to the confidence of the wake-up word of each sampling signal.
That is, wake-up word detection mainly detects the degree of similarity between the content of the sampling signal in a channel and the preconfigured wake-up word. If the preconfigured wake-up word is "电视你好" ("Hello TV") and the content detected in one sampling signal is "电视" (part of the wake-up word), that sampling signal is similar to the preconfigured wake-up word to some extent, and its wake-up word detection score may be 5 points. If the content detected in another sampling signal is "电视你" (most of the wake-up word), that sampling signal is similar to the preconfigured wake-up word to a large extent, and its wake-up word detection score may be 8 points. Of course, the specific wake-up word detection score is calculated by an algorithm; this is merely an example and should not be understood as a limitation on the wake-up word detection score.
After the terminal device detects the wake-up word detection score of each sampling signal, a joint decision needs to be made according to the wake-up word detection scores of the sampling signals. The joint decision scheme may be:
when the wake-up word detection score of any one of the sampling signals is greater than a score threshold, the terminal device determines, from the wake-up word detection scores of the sampling signals, that the wake-up word exists.
For example, if the score threshold is 6 points and the wake-up word detection scores of the four channels are 3, 5, 7, and 8 points respectively, the wake-up word detection scores of two channels are greater than the score threshold of 6 points, so it can be determined that the wake-up word exists. Of course, this is only one decision scheme for determining that the wake-up word exists; other feasible decision schemes are possible, for example, determining whether the wake-up word exists from the accumulated scores of the channels.
After it is determined that the wake-up word exists, the azimuth estimation, speech signal processing, and speech recognition functions can be activated.
In addition, after determining that the wake-up word exists, the terminal device may further:
determine a time period from when the wake-up word begins to appear until it ends;
extract target sampling signals within the time period from the buffered multi-channel sampling signals;
correspondingly, that the terminal device performs spatial spectrum estimation on the buffered multi-channel sampling signals to obtain a spatial spectrum estimation result includes:
performing spatial spectrum estimation on the target sampling signals to obtain a spatial spectrum estimation result.
When the time period from when the wake-up word begins to appear until it ends is determined, only the target sampling signals within that time period need to be extracted when spatial spectrum estimation is performed on the buffered multi-channel sampling signals; it is not necessary to estimate all the buffered sampling signals, which reduces the amount of computation in spatial spectrum estimation.
Determining the time period from when the wake-up word begins to appear until it ends may include:
determining the time point at which the wake-up word ends;
determining the time point at which the wake-up word begins to appear according to the time point at which the wake-up word ends and a score change record of the wake-up word or an energy fluctuation record of the sampling signals;
determining, according to the time point at which the wake-up word begins to appear and the time point at which the wake-up word ends, the time period from when the wake-up word begins to appear until it ends.
In the embodiments of this application, the time point at which the wake-up word ends is easy to determine; for example, the point with the highest wake-up word detection score may be the time point at which the wake-up word ends. The time point at which the wake-up word begins to appear may be the time point at which the wake-up word detection score begins to change: if no wake-up word appears, the previous wake-up word detection scores are essentially close to zero; when a wake-up word appears, the wake-up word detection score changes, for example rising to 1 point and then to 2 points, and the point where the change first appears can be determined as the time point at which the wake-up word begins to appear.
In addition, it should be noted that determining the time period of the wake-up word from the wake-up word detection scores is only one approach. For example, it may also be determined from an energy fluctuation record of the sampling signals: the energy of the sampling signals before and after the user speaks the wake-up word is relatively small, so the time period from when the energy begins to rise until it falls and levels off can be determined as the time period of the wake-up word.
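The score-record variant of locating the wake-up word's time span can be sketched as follows; the function, the near-zero threshold `eps`, and the sample record are illustrative assumptions, not the patented implementation:

```python
# Hedged sketch: locate (t0, t1) from a wake-up word score record.
# Per the text, the end time t1 is taken as the point of the highest score,
# and the start time t0 is the first point where the score departs from
# (near) zero. The threshold eps is an illustrative assumption.

def wake_word_span(score_record, eps=0.5):
    """score_record: list of (time, score) pairs in time order.
    Returns (t0, t1): start and end of the wake-up word."""
    t1 = max(score_record, key=lambda ts: ts[1])[0]   # score peaks where the word ends
    t0 = next(t for t, s in score_record if s > eps)  # first visible score change
    return t0, t1

record = [(0.0, 0.0), (0.2, 0.1), (0.4, 1.0), (0.6, 2.0), (0.8, 8.0)]
print(wake_word_span(record))  # (0.4, 0.8)
```

The energy-fluctuation variant would apply the same idea to a record of per-frame signal energy instead of detection scores.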
In the embodiments of this application, the buffer unit buffers the sampling signals, but if the user does not speak the wake-up word, there is no point in the buffer unit buffering many sampling signals. Therefore, to save buffer space, in the embodiments of this application the buffer is cleaned according to the length of the buffered sampling signals. The buffer-cleaning scheme may be: for the buffered multi-channel sampling signals, retain the most recent sampling signals of duration (M+N) and delete the sampling signals outside the (M+N) duration, where M is the duration occupied by the wake-up word and N is a preset duration.
That is, the buffer unit always buffers the most recently collected sampling signals of a duration greater than that occupied by the wake-up word, which both ensures that the wake-up word is buffered and effectively saves buffer space.
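The (M+N) retention policy above maps naturally onto a fixed-capacity ring buffer; the sketch below is an assumption-laden illustration (the sample rate and the values of M and N are invented for the example), not the patented implementation:

```python
from collections import deque

# Hedged sketch: keep only the most recent (M+N) seconds of samples per
# channel, where M covers the wake-up word duration and N is a preset
# margin. Sample rate and durations are illustrative assumptions.

RATE = 16000      # samples per second (assumed)
M, N = 1.5, 0.5   # wake-word duration and preset margin, in seconds (assumed)

# A deque with maxlen discards the oldest samples automatically.
buffer = deque(maxlen=int((M + N) * RATE))

for sample in range(5 * RATE):  # feed 5 seconds of dummy samples
    buffer.append(sample)

print(len(buffer))  # 32000 -> exactly (M+N) * RATE samples are retained
```

In a real multi-channel device there would be one such buffer per channel, and the stored elements would be audio frames rather than integers.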
After it is determined that the wake-up word exists, the azimuth estimation unit is activated. If the moment at which the wake-up word begins to appear is determined to be t0 and the moment at which the wake-up word ends is t1, the azimuth estimation unit extracts the target sampling signals in the period from t0 to t1 from the buffer unit and performs spatial spectrum estimation on the target sampling signals.
That the terminal device performs spatial spectrum estimation on the target sampling signals to obtain a spatial spectrum estimation result may include: calculating signal power intensities at multiple candidate azimuths according to the target sampling signals.
After receiving the activation signal, the azimuth estimation unit calculates the spatial spectrum using the target sampling signals in the period from t0 to t1. The spatial spectrum is the signal power intensity in each candidate direction corresponding to the multiple candidate azimuths.
The selection of candidate azimuths is determined by the usage scenario and the required estimation accuracy. For example, when a circular microphone array is used and the azimuth estimation accuracy requirement is 10 degrees, the candidate directions may be selected as 0°, 10°, 20°, ..., 350°; when a linear microphone array is used and the azimuth estimation accuracy requirement is 30 degrees, the candidate directions may be selected as 0°, 30°, 60°, ..., 180°. In the embodiments of this application, the multiple candidate azimuths may be denoted θ1, θ2, ..., θK, where K is the number of candidate azimuths. The spatial spectrum estimation algorithm estimates the signal power intensity in each candidate direction, denoted P1, P2, ..., PK. The spatial spectrum estimation algorithm may be the Super-Cardioid fixed beamforming algorithm or another spatial spectrum estimation algorithm, which is not discussed in detail here.
After the spatial spectrum estimation is completed, optionally, that the terminal device determines the azimuth of the target speech according to the spatial spectrum estimation result and the highest wake-up word detection score may include:
determining the azimuth of a target main beam, the target main beam being the main beam of the sampling signal corresponding to the highest wake-up word detection score;
determining local maximum points among the signal power intensities at the multiple candidate azimuths;
determining the azimuth of the target speech according to the azimuth of the target main beam and the local maximum points.
Determining the azimuth of the target speech according to the azimuth of the target main beam and the local maximum points may include:
determining the candidate azimuth corresponding to the local maximum point closest to the azimuth of the target main beam as the azimuth of the target speech; or,
if there are at least two local maximum points closest to the azimuth of the target main beam, determining the average value of the candidate azimuths corresponding to the at least two local maximum points as the azimuth of the target speech.
That is, in the embodiments of this application, as shown in FIG. 5, the azimuth estimation process may include two stages: spatial spectrum estimation, and a joint decision based on the spatial spectrum and the wake-up word detection scores. In the joint decision process, the spatial spectrum estimation result and the wake-up word detection scores (denoted S1, S2, ..., SN) may be combined to remove the interference caused by strong noise in the spatial spectrum. A feasible scheme is to determine the highest wake-up word detection score S* and the main beam direction β* of its preceding fixed beamforming algorithm. A higher wake-up word score represents better target speech quality and less residual noise, so the direction of the target speech is near β*. Among all the local maximum points of the spatial spectrum, the one closest to β* is found, and its corresponding candidate azimuth is denoted θ*; θ* is the estimate of the azimuth of the target speech.
When strong noise exists in the environment, the spatial spectrum in the above algorithm design may have multiple local maximum points. One or more of these local maximum points may be caused by noise interference, and the candidate azimuths corresponding to them represent the directions of point-source interference noise in the environment. With the assistance of β* on the azimuth, the interference produced by such noise can be filtered out. For example, suppose there is a local maximum point in each of the 90-degree and 270-degree directions; if, according to the highest wake-up word detection score S*, the main beam direction of its preceding fixed beamforming algorithm is β* = 60°, the local maximum point in the 90-degree direction can be selected, so that the azimuth of the target speech is accurately determined to be 90 degrees.
In addition, because a wake-up word in human-machine interaction has a natural minimum length limit, denoted Tmin, a second wake-up will not occur within the Tmin period after one wake-up. Therefore, the wake-up word detection computation during this period can be saved and used for azimuth estimation.
Therefore, optionally, in the embodiments of this application, when the terminal device performs spatial spectrum estimation on the buffered multi-channel sampling signals to obtain a spatial spectrum estimation result, the method may further include:
the terminal device stops performing wake-up word detection on each sampling signal of the multi-channel sampling signals within the period from determining that the wake-up word exists until the wake-up word can appear again.
Therefore, in the embodiments of this application, before the wake-up word is detected, the multi-channel wake-up word detection module runs continuously, the azimuth estimation module performs no computation, and the speech signal processing module performs no processing and only tracks internal state.
When the wake-up word is detected at time Ts, all computation of the multi-channel wake-up word detection modules is stopped within the period from Ts to Ts+Tmin, which may include the preceding fixed beamforming algorithm, the noise reduction algorithm, and the single-channel wake-up word detection modules.
Within the period from Ts to Ts+Tmin, the spatial spectrum estimation algorithm is used for spatial spectrum estimation, obtaining better spatial spectrum estimation performance and resolution; combined with the wake-up word detection scores at time Ts, the optimal azimuth of the target speech is finally obtained.
The above scheme of performing wake-up word detection and azimuth estimation in a time-shared manner can reduce the peak computation load of the system, and reduce system latency and possible frame loss and signal discontinuity.
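The time-sharing rule above reduces to a small gating state machine; this is a hedged sketch (the class, its names, and the Tmin value are assumptions for illustration, not the patented scheduler):

```python
# Hedged sketch of the time-sharing scheme: after a wake-up at time Ts,
# wake-up word detection is suspended until Ts + Tmin (the minimum
# wake-word length), freeing that compute budget for azimuth estimation.
# TMIN is an illustrative assumption.

TMIN = 1.0  # seconds

class DetectionGate:
    def __init__(self):
        self.suspended_until = float("-inf")

    def on_wake(self, ts):
        # A second wake-up cannot occur before ts + TMIN.
        self.suspended_until = ts + TMIN

    def detection_enabled(self, now):
        return now >= self.suspended_until

gate = DetectionGate()
gate.on_wake(ts=10.0)
print(gate.detection_enabled(10.5))  # False - the budget goes to azimuth estimation
print(gate.detection_enabled(11.5))  # True - detection resumes after Ts + Tmin
```

In a real device the same gate would also suspend the preceding fixed beamforming and noise-reduction stages, as the text notes.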
In addition, it should be noted that in the embodiments of this application, azimuth estimation performs no computation before the wake-up word is detected. After the activation signal provided by the wake-up word module is received, the target sampling signals in the period from t0 to t1 are extracted from the buffer unit, and the possible azimuth of the speech signal is estimated. The final azimuth estimation result of the target speech is obtained by combining this estimation result with the scores of the multi-channel wake-up word detection modules, and the azimuth of the target speech is output to the speech signal processing module, so that during voice interaction the speech signal processing module can enhance the signal in the direction of the azimuth of the target speech and suppress signals in other directions, thereby ensuring smooth voice interaction.
Before the wake-up word is detected, the speech signal processing module only tracks internal state, such as echo cancellation, noise intensity, and speech detection, without processing the multi-channel sampling signals. After receiving the activation signal provided by the wake-up word score joint decision module, it uses the azimuth of the speech signal most recently estimated by the azimuth estimation module as the target direction for speech processing algorithms such as beamforming, performs target speech signal enhancement, and outputs the enhanced signal to the speech recognition module.
The speech recognition module performs no recognition computation before the wake-up word is detected. After receiving the activation signal provided by the wake-up word score joint decision module, it recognizes the enhanced target speech signal provided by the speech signal processing module and provides the recognition result until recognition ends.
The foregoing embodiments describe the azimuth estimation method during voice interaction. The terminal device in the embodiments of this application is described below with reference to the accompanying drawings.
As shown in FIG. 6, the terminal device 40 provided in the embodiments of this application includes one or more processors and one or more memories storing program units, where the program units are executed by the processors and include:
an acquisition unit 401, configured to acquire multi-channel sampling signals;
a buffer unit 402, configured to buffer the multi-channel sampling signals acquired by the acquisition unit 401;
a detection unit 403, configured to perform wake-up word detection on each sampling signal of the multi-channel sampling signals buffered by the buffer unit 402 and determine a wake-up word detection score of each sampling signal;
a spectrum estimation unit 404, configured to perform spatial spectrum estimation on the buffered multi-channel sampling signals to obtain a spatial spectrum estimation result if it is determined, according to the wake-up word detection score of each sampling signal determined by the detection unit 403, that the wake-up word exists, the wake-up word being included in a target speech;
a determining unit 405, configured to determine an azimuth of the target speech according to the spatial spectrum estimation result of the spectrum estimation unit 404 and the highest wake-up word detection score detected by the detection unit 403.
In the embodiments of this application, the highest wake-up word detection score among the multi-channel sampling signals is used to assist the spatial spectrum estimation result of the multi-channel sampling signals in detecting the azimuth of the target speech, which avoids the influence of noise on the detection of the azimuth of the target speech and improves the accuracy of azimuth estimation during voice interaction.
Optionally, the determining unit 405 is further configured to determine a time period from when the wake-up word begins to appear until it ends;
the spectrum estimation unit 404 is configured to:
extract target sampling signals within the time period from the buffered multi-channel sampling signals;
perform spatial spectrum estimation on the target sampling signals to obtain a spatial spectrum estimation result.
Optionally, the spectrum estimation unit 404 is configured to: calculate signal power intensities at multiple candidate azimuths according to the target sampling signals.
Optionally, the spectrum estimation unit 404 is configured to:
determine the azimuth of a target main beam, the target main beam being the main beam of the sampling signal corresponding to the highest wake-up word detection score;
determine local maximum points among the signal power intensities at the multiple candidate azimuths;
determine the azimuth of the target speech according to the azimuth of the target main beam and the local maximum points.
Optionally, the spectrum estimation unit 404 is configured to:
determine the candidate azimuth corresponding to the local maximum point closest to the azimuth of the target main beam as the azimuth of the target speech.
Optionally, the spectrum estimation unit 404 is configured to:
if there are at least two local maximum points closest to the azimuth of the target main beam, determine the average value of the candidate azimuths corresponding to the at least two local maximum points as the azimuth of the target speech.
Optionally, the determining unit 405 is configured to:
determine the time point at which the wake-up word ends;
determine the time point at which the wake-up word begins to appear according to the time point at which the wake-up word ends and a score change record of the wake-up word or an energy fluctuation record of the sampling signals;
determine, according to the time point at which the wake-up word begins to appear and the time point at which the wake-up word ends, the time period from when the wake-up word begins to appear until it ends.
Optionally, as shown in FIG. 7, the terminal device 40 provided in the embodiments of this application further includes a control unit 406,
where the control unit 406 is configured to stop performing wake-up word detection on each sampling signal of the multi-channel sampling signals within the period from determining that the wake-up word exists until the wake-up word can appear again.
Optionally, the detection unit 403 is configured to:
perform wake-up word detection on each sampling signal of the multi-channel sampling signals and determine a confidence of the wake-up word for each sampling signal, the confidence being the degree of similarity between the content of each sampling signal and a preconfigured wake-up word;
determine the wake-up word detection score of each sampling signal according to the confidence of the wake-up word of each sampling signal.
Optionally, the determining unit 405 is further configured to: when the wake-up word detection score of any one of the sampling signals is greater than a score threshold, determine that the wake-up word exists.
Optionally, as shown in FIG. 8, the terminal device 40 provided in the embodiments of this application further includes a cleanup unit 407,
where the cleanup unit 407 is configured to: for the buffered multi-channel sampling signals, retain the most recent sampling signals of duration (M+N) and delete the sampling signals outside the (M+N) duration, where M is the duration occupied by the wake-up word and N is a preset duration.
The terminal device 40 described in the foregoing embodiments can be understood with reference to the corresponding descriptions in FIG. 1 to FIG. 5 and is not described again here.
FIG. 9 is a schematic structural diagram of a terminal device 50 provided in an embodiment of this application. The terminal device 50 includes a processor 510, a memory 540, and an input/output (I/O) interface 530. The memory 540 may include a read-only memory and a random access memory and provides operation instructions and data to the processor 510. A part of the memory 540 may further include a non-volatile random access memory (NVRAM).
In some implementations, the memory 540 stores the following elements: executable modules or data structures, or a subset thereof, or an extended set thereof.
In this embodiment of this application, during azimuth estimation, by invoking the operation instructions stored in the memory 540 (the operation instructions may be stored in an operating system), the following operations are performed:
acquiring multi-channel sampling signals and buffering the multi-channel sampling signals;
performing wake-up word detection on each sampling signal of the multi-channel sampling signals and determining a wake-up word detection score of each sampling signal;
if it is determined, according to the wake-up word detection score of each sampling signal, that the wake-up word exists, performing spatial spectrum estimation on the buffered multi-channel sampling signals to obtain a spatial spectrum estimation result, the wake-up word being included in a target speech;
determining an azimuth of the target speech according to the spatial spectrum estimation result and the highest wake-up word detection score.
In the embodiments of this application, the highest wake-up word detection score among the multi-channel sampling signals is used to assist the spatial spectrum estimation result of the multi-channel sampling signals in detecting the azimuth of the target speech, which avoids the influence of noise on the detection of the azimuth of the target speech and improves the accuracy of azimuth estimation during voice interaction.
The processor 510 controls the operation of the terminal device 50, and the processor 510 may also be referred to as a central processing unit (CPU). The memory 540 may include a read-only memory and a random access memory and provides instructions and data to the processor 510. A part of the memory 540 may further include a non-volatile random access memory (NVRAM). In a specific application, the components of the terminal device 50 are coupled together through a bus system 520, where the bus system 520 may include a power bus, a control bus, and a status signal bus in addition to a data bus. For clarity, the various buses are all labeled as the bus system 520 in the figure.
The method disclosed in the foregoing embodiments of this application may be applied to the processor 510, or implemented by the processor 510. The processor 510 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the foregoing method may be completed by an integrated logic circuit of hardware in the processor 510 or by instructions in the form of software. The processor 510 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or perform the methods, steps, and logical block diagrams disclosed in the embodiments of this application. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in the embodiments of this application may be directly embodied as being performed by a hardware decoding processor, or performed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 540, and the processor 510 reads the information in the memory 540 and completes the steps of the foregoing method in combination with its hardware.
Optionally, the processor 510 is configured to:
determine a time period from when the wake-up word begins to appear until it ends;
extract target sampling signals within the time period from the buffered multi-channel sampling signals;
perform spatial spectrum estimation on the target sampling signals to obtain a spatial spectrum estimation result.
Optionally, the processor 510 is configured to:
calculate signal power intensities at multiple candidate azimuths according to the target sampling signals.
Optionally, the processor 510 is configured to:
determine the azimuth of a target main beam, the target main beam being the main beam of the sampling signal corresponding to the highest wake-up word detection score;
determine local maximum points among the signal power intensities at the multiple candidate azimuths;
determine the azimuth of the target speech according to the azimuth of the target main beam and the local maximum points.
Optionally, the processor 510 is configured to:
determine the candidate azimuth corresponding to the local maximum point closest to the azimuth of the target main beam as the azimuth of the target speech.
Optionally, the processor 510 is configured to:
if there are at least two local maximum points closest to the azimuth of the target main beam, determine the average value of the candidate azimuths corresponding to the at least two local maximum points as the azimuth of the target speech.
Optionally, the processor 510 is configured to:
determine the time point at which the wake-up word ends;
determine the time point at which the wake-up word begins to appear according to the time point at which the wake-up word ends and a score change record of the wake-up word or an energy fluctuation record of the sampling signals;
determine, according to the time point at which the wake-up word begins to appear and the time point at which the wake-up word ends, the time period from when the wake-up word begins to appear until it ends.
Optionally, the processor 510 is further configured to:
stop performing wake-up word detection on each sampling signal of the multi-channel sampling signals within the period from determining that the wake-up word exists until the wake-up word can appear again.
Optionally, the processor 510 is configured to:
perform wake-up word detection on each sampling signal of the multi-channel sampling signals and determine a confidence of the wake-up word for each sampling signal, the confidence being the degree of similarity between the content of each sampling signal and a preconfigured wake-up word;
determine the wake-up word detection score of each sampling signal according to the confidence of the wake-up word of each sampling signal.
Optionally, the processor 510 is further configured to:
when the wake-up word detection score of any one of the sampling signals is greater than a score threshold, determine that the wake-up word exists.
Optionally, the processor 510 is further configured to:
for the buffered multi-channel sampling signals, retain the most recent sampling signals of duration (M+N) and delete the sampling signals outside the (M+N) duration, where M is the duration occupied by the wake-up word and N is a preset duration.
The description of the terminal device 50 above can be understood with reference to the descriptions in FIG. 1 to FIG. 5 and is not repeated here.
The foregoing embodiments may be implemented wholly or partly by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented wholly or partly in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are produced wholly or partly. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (for example, coaxial cable, optical fiber, or digital subscriber line (DSL)) or wireless (for example, infrared, radio, or microwave) means. The computer-readable storage medium may be any available medium that a computer can access, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (Solid State Disk, SSD)).
A person of ordinary skill in the art may understand that all or some of the steps of the methods in the foregoing embodiments may be completed by a program instructing related hardware. The program may be stored in a computer-readable storage medium, and the storage medium may include a ROM, a RAM, a magnetic disk, an optical disc, or the like.
The azimuth estimation method, terminal device, and computer-readable storage medium provided in the embodiments of this application are described in detail above. Specific examples are used herein to explain the principles and implementations of the embodiments of this application, and the descriptions of the foregoing embodiments are only intended to help understand the method and core idea of this application. Meanwhile, a person of ordinary skill in the art may make changes to the specific implementations and application scope according to the idea of the embodiments of this application. In conclusion, the content of this specification should not be construed as a limitation on the embodiments of this application.
In the embodiments of this application, the highest wake-up word detection score among the multi-channel sampling signals is used to assist the spatial spectrum estimation result of the multi-channel sampling signals in detecting the azimuth of the target speech, which avoids the influence of noise on the detection of the azimuth of the target speech and improves the accuracy of azimuth estimation during voice interaction.
Claims (14)
- An azimuth estimation method, comprising: a terminal device acquiring multi-channel sampling signals and buffering the multi-channel sampling signals; the terminal device performing wake-up word detection on each sampling signal of the multi-channel sampling signals and determining a wake-up word detection score of each sampling signal; if the terminal device determines, according to the wake-up word detection score of each sampling signal, that the wake-up word exists, performing spatial spectrum estimation on the buffered multi-channel sampling signals to obtain a spatial spectrum estimation result, the wake-up word being comprised in a target speech; and the terminal device determining an azimuth of the target speech according to the spatial spectrum estimation result and the highest wake-up word detection score.
- The method according to claim 1, wherein the method further comprises: the terminal device determining a time period from when the wake-up word begins to appear until it ends; and the terminal device extracting target sampling signals within the time period from the buffered multi-channel sampling signals; and wherein the performing spatial spectrum estimation on the buffered multi-channel sampling signals to obtain a spatial spectrum estimation result comprises: performing spatial spectrum estimation on the target sampling signals to obtain a spatial spectrum estimation result.
- The method according to claim 2, wherein the performing spatial spectrum estimation on the target sampling signals to obtain a spatial spectrum estimation result comprises: calculating signal power intensities at multiple candidate azimuths according to the target sampling signals.
- The method according to claim 3, wherein the terminal device determining an azimuth of the target speech according to the spatial spectrum estimation result and the highest wake-up word detection score comprises: determining an azimuth of a target main beam, the target main beam being the main beam of the sampling signal corresponding to the highest wake-up word detection score; determining local maximum points among the signal power intensities at the multiple candidate azimuths; and determining the azimuth of the target speech according to the azimuth of the target main beam and the local maximum points.
- The method according to claim 4, wherein the determining the azimuth of the target speech according to the azimuth of the target main beam and the local maximum points comprises: determining the candidate azimuth corresponding to the local maximum point closest to the azimuth of the target main beam as the azimuth of the target speech.
- The method according to claim 4, wherein the determining the azimuth of the target speech according to the azimuth of the target main beam and the local maximum points comprises: if there are at least two local maximum points closest to the azimuth of the target main beam, determining the average value of the candidate azimuths corresponding to the at least two local maximum points as the azimuth of the target speech.
- The method according to any one of claims 2-6, wherein the terminal device determining a time period from when the wake-up word begins to appear until it ends comprises: determining a time point at which the wake-up word ends; determining a time point at which the wake-up word begins to appear according to the time point at which the wake-up word ends and a score change record of the wake-up word or an energy fluctuation record of the sampling signals; and determining, according to the time point at which the wake-up word begins to appear and the time point at which the wake-up word ends, the time period from when the wake-up word begins to appear until it ends.
- The method according to any one of claims 2-6, wherein when the spatial spectrum estimation is performed on the buffered multi-channel sampling signals to obtain a spatial spectrum estimation result, the method further comprises: the terminal device stopping performing wake-up word detection on each sampling signal of the multi-channel sampling signals within a period from determining that the wake-up word exists until the wake-up word can appear again.
- The method according to any one of claims 1-6, wherein the terminal device performing wake-up word detection on each sampling signal of the multi-channel sampling signals and determining a wake-up word detection score of each sampling signal comprises: performing wake-up word detection on each sampling signal of the multi-channel sampling signals and determining a confidence of the wake-up word for each sampling signal, the confidence being a degree of similarity between content of each sampling signal and a preconfigured wake-up word; and determining the wake-up word detection score of each sampling signal according to the confidence of the wake-up word of each sampling signal.
- The method according to any one of claims 1-6, wherein the method further comprises: when the wake-up word detection score of any one of the sampling signals is greater than a score threshold, the terminal device determining that the wake-up word exists.
- The method according to any one of claims 1-6, wherein the method further comprises: for the buffered multi-channel sampling signals, the terminal device retaining the most recent sampling signals of duration (M+N) and deleting the sampling signals outside the (M+N) duration, wherein M is a duration occupied by the wake-up word and N is a preset duration.
- A terminal device, comprising one or more processors and one or more memories storing program units, wherein the program units are executed by the processors and comprise: an acquisition unit, configured to acquire multi-channel sampling signals; a buffer unit, configured to buffer the multi-channel sampling signals acquired by the acquisition unit; a detection unit, configured to perform wake-up word detection on each sampling signal of the multi-channel sampling signals buffered by the buffer unit and determine a wake-up word detection score of each sampling signal; a spectrum estimation unit, configured to perform spatial spectrum estimation on the buffered multi-channel sampling signals to obtain a spatial spectrum estimation result if it is determined, according to the wake-up word detection score of each sampling signal determined by the detection unit, that the wake-up word exists, the wake-up word being comprised in a target speech; and a determining unit, configured to determine an azimuth of the target speech according to the spatial spectrum estimation result of the spectrum estimation unit and the highest wake-up word detection score detected by the detection unit.
- A terminal device, comprising an input/output (I/O) interface, a processor, and a memory, wherein the memory stores program instructions; and the processor is configured to execute the program instructions stored in the memory to perform the method according to any one of claims 1-11.
- A computer-readable storage medium comprising instructions that, when run on a computer device, cause the computer device to perform the method according to any one of claims 1-11.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP19846963.7A EP3836136B1 (en) | 2018-08-06 | 2019-08-02 | Azimuth estimation method, device, and storage medium |
US17/006,440 US11908456B2 (en) | 2018-08-06 | 2020-08-28 | Azimuth estimation method, device, and storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810887965.5A CN110164423B (zh) | 2018-08-06 | 2018-08-06 | 一种方位角估计的方法、设备及存储介质 |
CN201810887965.5 | 2018-08-06 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/006,440 Continuation US11908456B2 (en) | 2018-08-06 | 2020-08-28 | Azimuth estimation method, device, and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020029882A1 true WO2020029882A1 (zh) | 2020-02-13 |
Family
ID=67645177
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2019/099049 WO2020029882A1 (zh) | 2018-08-06 | 2019-08-02 | 一种方位角估计的方法、设备及存储介质 |
Country Status (5)
Country | Link |
---|---|
US (1) | US11908456B2 (zh) |
EP (1) | EP3836136B1 (zh) |
CN (1) | CN110164423B (zh) |
TW (1) | TWI711035B (zh) |
WO (1) | WO2020029882A1 (zh) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110517677B (zh) * | 2019-08-27 | 2022-02-08 | 腾讯科技(深圳)有限公司 | 语音处理系统、方法、设备、语音识别系统及存储介质 |
CN111276143B (zh) * | 2020-01-21 | 2023-04-25 | 北京远特科技股份有限公司 | 声源定位方法、装置、语音识别控制方法和终端设备 |
WO2022043675A2 (en) | 2020-08-24 | 2022-03-03 | Unlikely Artificial Intelligence Limited | A computer implemented method for the automated analysis or use of data |
KR20220037846A (ko) * | 2020-09-18 | 2022-03-25 | 삼성전자주식회사 | 음성 인식을 수행하기 위한 전자 장치를 식별하기 위한 전자 장치 및 그 동작 방법 |
CN112201259B (zh) * | 2020-09-23 | 2022-11-25 | 北京百度网讯科技有限公司 | 声源定位方法、装置、设备和计算机存储介质 |
CN113281727B (zh) * | 2021-06-02 | 2021-12-07 | 中国科学院声学研究所 | 一种基于水平线列阵的输出增强的波束形成方法及其系统 |
CN113593548B (zh) * | 2021-06-29 | 2023-12-19 | 青岛海尔科技有限公司 | 智能设备的唤醒方法和装置、存储介质及电子装置 |
US11977854B2 (en) | 2021-08-24 | 2024-05-07 | Unlikely Artificial Intelligence Limited | Computer implemented methods for the automated analysis or use of data, including use of a large language model |
US12073180B2 (en) | 2021-08-24 | 2024-08-27 | Unlikely Artificial Intelligence Limited | Computer implemented methods for the automated analysis or use of data, including use of a large language model |
US11989527B2 (en) | 2021-08-24 | 2024-05-21 | Unlikely Artificial Intelligence Limited | Computer implemented methods for the automated analysis or use of data, including use of a large language model |
US11989507B2 (en) | 2021-08-24 | 2024-05-21 | Unlikely Artificial Intelligence Limited | Computer implemented methods for the automated analysis or use of data, including use of a large language model |
US12067362B2 (en) | 2021-08-24 | 2024-08-20 | Unlikely Artificial Intelligence Limited | Computer implemented methods for the automated analysis or use of data, including use of a large language model |
WO2023196695A1 (en) * | 2022-04-07 | 2023-10-12 | Stryker Corporation | Wake-word processing in an electronic device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6707910B1 (en) * | 1997-09-04 | 2004-03-16 | Nokia Mobile Phones Ltd. | Detection of the speech activity of a source |
US20150379990A1 (en) * | 2014-06-30 | 2015-12-31 | Rajeev Conrad Nongpiur | Detection and enhancement of multiple speech sources |
CN106251877A (zh) * | 2016-08-11 | 2016-12-21 | 珠海全志科技股份有限公司 | 语音声源方向估计方法及装置 |
CN106531179A (zh) * | 2015-09-10 | 2017-03-22 | 中国科学院声学研究所 | 一种基于语义先验的选择性注意的多通道语音增强方法 |
CN106611597A (zh) * | 2016-12-02 | 2017-05-03 | 百度在线网络技术(北京)有限公司 | 基于人工智能的语音唤醒方法和装置 |
CN108122556A (zh) * | 2017-08-08 | 2018-06-05 | 问众智能信息科技(北京)有限公司 | 减少驾驶人语音唤醒指令词误触发的方法及装置 |
CN108122563A (zh) * | 2017-12-19 | 2018-06-05 | 北京声智科技有限公司 | 提高语音唤醒率及修正doa的方法 |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8323189B2 (en) * | 2006-05-12 | 2012-12-04 | Bao Tran | Health monitoring appliance |
CA2766062C (en) * | 2009-06-19 | 2014-09-23 | Research In Motion Limited | Reference signal design for wireless communication system |
JP5289517B2 (ja) * | 2011-07-28 | 2013-09-11 | 株式会社半導体理工学研究センター | センサネットワークシステムとその通信方法 |
US9818407B1 (en) * | 2013-02-07 | 2017-11-14 | Amazon Technologies, Inc. | Distributed endpointing for speech recognition |
US20160371977A1 (en) * | 2014-02-26 | 2016-12-22 | Analog Devices, Inc. | Apparatus, systems, and methods for providing intelligent vehicular systems and services |
US9940949B1 (en) * | 2014-12-19 | 2018-04-10 | Amazon Technologies, Inc. | Dynamic adjustment of expression detection criteria |
EP3067884B1 (en) * | 2015-03-13 | 2019-05-08 | Samsung Electronics Co., Ltd. | Speech recognition system and speech recognition method thereof |
US10192546B1 (en) * | 2015-03-30 | 2019-01-29 | Amazon Technologies, Inc. | Pre-wakeword speech processing |
US9699549B2 (en) * | 2015-03-31 | 2017-07-04 | Asustek Computer Inc. | Audio capturing enhancement method and audio capturing system using the same |
GB201506046D0 (en) * | 2015-04-09 | 2015-05-27 | Sinvent As | Speech recognition |
KR101627264B1 (ko) * | 2015-08-10 | 2016-06-03 | 주식회사 홍인터내셔날 | 복수의 카메라를 구비한 다트 게임 장치 및 컴퓨터-판독가능 매체에 저장된 컴퓨터 프로그램 |
US9805714B2 (en) * | 2016-03-22 | 2017-10-31 | Asustek Computer Inc. | Directional keyword verification method applicable to electronic device and electronic device using the same |
US10109294B1 (en) * | 2016-03-25 | 2018-10-23 | Amazon Technologies, Inc. | Adaptive echo cancellation |
US10431211B2 (en) * | 2016-07-29 | 2019-10-01 | Qualcomm Incorporated | Directional processing of far-field audio |
CN109565317B (zh) * | 2016-08-19 | 2022-06-17 | 苹果公司 | 用于移动通信系统的波束精化和控制信令 |
CN107910013B (zh) | 2017-11-10 | 2021-09-24 | Oppo广东移动通信有限公司 | 一种语音信号的输出处理方法及装置 |
US10959029B2 (en) * | 2018-05-25 | 2021-03-23 | Sonos, Inc. | Determining and adapting to changes in microphone performance of playback devices |
WO2020145688A1 (en) * | 2019-01-10 | 2020-07-16 | Samsung Electronics Co., Ltd. | Electronic device and controlling method thereof |
WO2021015308A1 (ko) * | 2019-07-19 | 2021-01-28 | 엘지전자 주식회사 | 로봇 및 그의 기동어 인식 방법 |
US11482224B2 (en) * | 2020-05-20 | 2022-10-25 | Sonos, Inc. | Command keywords with input detection windowing |
US11915708B2 (en) * | 2021-03-18 | 2024-02-27 | Samsung Electronics Co., Ltd. | Methods and systems for invoking a user-intended internet of things (IoT) device from a plurality of IoT devices |
-
2018
- 2018-08-06 CN CN201810887965.5A patent/CN110164423B/zh active Active
-
2019
- 2019-08-02 WO PCT/CN2019/099049 patent/WO2020029882A1/zh unknown
- 2019-08-02 EP EP19846963.7A patent/EP3836136B1/en active Active
- 2019-08-06 TW TW108127934A patent/TWI711035B/zh active
-
2020
- 2020-08-28 US US17/006,440 patent/US11908456B2/en active Active
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6707910B1 (en) * | 1997-09-04 | 2004-03-16 | Nokia Mobile Phones Ltd. | Detection of the speech activity of a source |
US20150379990A1 (en) * | 2014-06-30 | 2015-12-31 | Rajeev Conrad Nongpiur | Detection and enhancement of multiple speech sources |
CN106531179A (zh) * | 2015-09-10 | 2017-03-22 | 中国科学院声学研究所 | Multi-channel speech enhancement method based on semantic-prior selective attention |
CN106251877A (zh) * | 2016-08-11 | 2016-12-21 | 珠海全志科技股份有限公司 | Voice sound source direction estimation method and apparatus |
CN106611597A (zh) * | 2016-12-02 | 2017-05-03 | 百度在线网络技术(北京)有限公司 | Artificial-intelligence-based voice wake-up method and apparatus |
CN108122556A (zh) * | 2017-08-08 | 2018-06-05 | 问众智能信息科技(北京)有限公司 | Method and apparatus for reducing false triggering of driver voice wake-up command words |
CN108122563A (zh) * | 2017-12-19 | 2018-06-05 | 北京声智科技有限公司 | Method for improving voice wake-up rate and correcting DOA |
Non-Patent Citations (1)
Title |
---|
See also references of EP3836136A4 * |
Also Published As
Publication number | Publication date |
---|---|
EP3836136B1 (en) | 2023-07-19 |
EP3836136A1 (en) | 2021-06-16 |
TW202008352A (zh) | 2020-02-16 |
CN110164423B (zh) | 2023-01-20 |
TWI711035B (zh) | 2020-11-21 |
EP3836136A4 (en) | 2021-09-08 |
CN110164423A (zh) | 2019-08-23 |
US20200395005A1 (en) | 2020-12-17 |
US11908456B2 (en) | 2024-02-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2020029882A1 (zh) | Azimuth estimation method, device and storage medium | |
US10469967B2 (en) | Utilizing digital microphones for low power keyword detection and noise suppression | |
CN109671433B (zh) | Keyword detection method and related apparatus | |
CN107577449B (zh) | Wake-up voice pickup method, apparatus, device and storage medium | |
CN110556103B (zh) | Audio signal processing method, apparatus, system, device and storage medium | |
US10602267B2 (en) | Sound signal processing apparatus and method for enhancing a sound signal | |
US10601599B2 (en) | Voice command processing in low power devices | |
CN108962263B (zh) | Smart device control method and system | |
US9460735B2 (en) | Intelligent ancillary electronic device | |
CN107464565B (zh) | Far-field voice wake-up method and device | |
EP2994911B1 (en) | Adaptive audio frame processing for keyword detection | |
JP6450139B2 (ja) | Speech recognition device, speech recognition method, and speech recognition program | |
US20220358909A1 (en) | Processing audio signals | |
US20230333205A1 (en) | Sound source positioning method and apparatus | |
WO2020043037A1 (zh) | Voice transcription device, system, method, and electronic device | |
CN111048118B (zh) | Voice signal processing method, apparatus and terminal | |
WO2021253235A1 (zh) | Voice activity detection method and apparatus | |
US11783809B2 (en) | User voice activity detection using dynamic classifier | |
CN111048096B (zh) | Voice signal processing method, apparatus and terminal | |
CN111710341A (zh) | Voice cut-point detection method, apparatus, medium and electronic device | |
JP2020024310A (ja) | Voice processing system and voice processing method | |
US11823707B2 (en) | Sensitivity mode for an audio spotting system | |
US12057138B2 (en) | Cascade audio spotting system | |
CN116364087A (zh) | Method, system, device and storage medium for improving wake-up rate | |
CN114203136A (zh) | Echo cancellation method, speech recognition method, voice wake-up method and apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19846963; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2019846963; Country of ref document: EP; Effective date: 20210309 |