WO2018032760A1 - Voice information processing method and apparatus - Google Patents

Voice information processing method and apparatus

Info

Publication number
WO2018032760A1
Authority
WO
WIPO (PCT)
Prior art keywords
voice information
information
feature
preset
voice
Prior art date
Application number
PCT/CN2017/077537
Other languages
English (en)
French (fr)
Inventor
Wei Xing (魏兴)
Original Assignee
ZTE Corporation (中兴通讯股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZTE Corporation (中兴通讯股份有限公司)
Priority to EP17840737.5A (published as EP3499502A1)
Publication of WO2018032760A1

Classifications

    • G: PHYSICS
        • G10: MUSICAL INSTRUMENTS; ACOUSTICS
            • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
                • G10L15/00: Speech recognition
                    • G10L15/02: Feature extraction for speech recognition; Selection of recognition unit
                    • G10L15/08: Speech classification or search
                    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
                        • G10L2015/223: Execution procedure of a spoken command
                • G10L17/00: Speaker identification or verification techniques
                    • G10L17/02: Preprocessing operations, e.g. segment selection; Pattern representation or modelling, e.g. based on linear discriminant analysis [LDA] or principal components; Feature selection or extraction
                    • G10L17/06: Decision making techniques; Pattern matching strategies
                        • G10L17/08: Use of distortion metrics or a particular distance between probe pattern and reference templates
                    • G10L17/26: Recognition of special voice characteristics, e.g. for use in lie detectors; Recognition of animal voices
                • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
                    • G10L25/03: characterised by the type of extracted parameters
                        • G10L25/18: the extracted parameters being spectral information of each sub-band
                    • G10L25/48: specially adapted for particular use
                        • G10L25/51: for comparison or discrimination

Definitions

  • The present invention relates to voice processing technologies in the field of communications, and in particular to a voice information processing method and apparatus.
  • Voice technology is now widely used in people's work, entertainment, sports, and other aspects of daily life.
  • Google has integrated new voice dictation tools into its Google Docs application, freeing users from traditional keyboard-based human-computer interaction;
  • Microsoft and Apple have likewise integrated their own voice products, Cortana and Siri, into their respective computer systems; some smartphones and wearable devices can also interact with terminal devices through voice technology.
  • Existing speech recognition technology mainly converts, either locally or in the cloud, the language information contained in the user's voice information into text and compares it with the corresponding text in the sampled data, while simultaneously comparing the frequency resonance of the two sound segments, in order to distinguish different users.
  • However, existing speech recognition technology performs only an empirical identification of certain "physical features" of the user's voice, without considering changes in sound frequency caused by the user's speaking habits, mood, and other factors.
  • As a result, the frequency error relative to the sampled user voice is large, and the target user cannot be accurately identified.
  • Some users' operations then fall outside their actual operation authority, resulting in a poor user experience.
  • The embodiments of the present invention provide a voice information processing method and apparatus, which at least partially solve the problem in the prior art that the target user cannot be accurately identified according to voice information.
  • A voice information processing method, comprising:
  • Before the first voice information is analyzed to obtain the first feature information and the second feature information of the first voice information, the method further includes: acquiring a first time domain waveform corresponding to the first voice information and determining whether it is continuous;
  • if the first time domain waveform is continuous, performing the analysis processing on the first voice information to obtain the first feature information and the second feature information of the first voice information;
  • if the first time domain waveform is discontinuous, re-acquiring the first voice information.
  • Analyzing the first voice information to obtain the first feature information and the second feature information of the first voice information includes:
  • determining, based on the first feature information and the second feature information of the first voice information, the relationship between the first voice information and the preset voice information, and deciding according to the determination result whether to perform the operation corresponding to the first voice information, which includes:
  • if the first feature coefficient is smaller than the first threshold and the second feature coefficient is smaller than the second threshold, determining that the first voice information matches the preset voice information and performing the operation corresponding to the first voice information.
  • Determining that the first voice information matches the preset voice information and performing the operation corresponding to the first voice information includes:
  • if the first feature coefficient is smaller than the first threshold and the second feature coefficient is smaller than the second threshold, determining that the first voice information matches the preset voice information and acquiring the preset operation authority of the preset voice information.
  • The method further includes:
  • if the frequency of the first voice information is within the preset frequency range, setting an operation authority of the user corresponding to the first voice information.
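As a hedged illustration of the frequency-based authority step above, such a mapping might look like the sketch below. The frequency bands, authority labels, and function name are assumptions for illustration only; the patent does not specify them.

```python
def operation_authority(dominant_freq_hz, ranges):
    """Map the voice's dominant frequency to an operation authority.

    `ranges` is a list of ((low_hz, high_hz), authority) pairs; both the
    bands and the authority labels are illustrative assumptions.
    Returns None when the frequency falls in no preset range.
    """
    for (low, high), authority in ranges:
        if low <= dominant_freq_hz <= high:
            return authority
    return None
```

A caller would typically obtain the dominant frequency from the spectrum analysis described later and then look up the matching authority.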
  • A voice information processing apparatus comprises a first acquiring unit, a second acquiring unit, and a first processing unit, wherein:
  • the first acquiring unit is configured to acquire first voice information;
  • the second acquiring unit is configured to perform analysis processing on the first voice information to obtain first feature information and second feature information of the first voice information;
  • the first processing unit is configured to determine, according to the first feature information and the second feature information of the first voice information, a relationship between the first voice information and the preset voice information, and to determine, according to the determination result, whether to perform an operation corresponding to the first voice information.
  • The apparatus further includes a third acquiring unit, a first determining unit, and a second processing unit, wherein:
  • the third acquiring unit is configured to acquire a first time domain waveform corresponding to the first voice information;
  • the first determining unit is configured to determine whether the first time domain waveform of the first voice information is continuous;
  • the second processing unit is configured to perform the analysis processing on the first voice information to obtain the first feature information and the second feature information of the first voice information if the first time domain waveform of the first voice information is continuous;
  • the second processing unit is further configured to re-acquire the first voice information if the first time domain waveform of the first voice information is discontinuous.
  • The second acquiring unit includes a first acquiring module and a second acquiring module, wherein:
  • the first acquiring module is configured to perform spectrum analysis on the first time domain waveform of the first voice information to obtain a frequency domain waveform of the first voice information;
  • the first acquiring module is further configured to acquire the first feature information of the first voice information according to the frequency domain waveform of the first voice information;
  • the second acquiring module is configured to filter the first time domain waveform of the first voice information and process it by using a delay compensation mechanism to obtain a second time domain waveform of the first voice information;
  • the second acquiring module is further configured to acquire the second feature information of the first voice information according to the second time domain waveform of the first voice information.
  • The first processing unit includes a third acquiring module, a determining module, and a processing module, wherein:
  • the third acquiring module is configured to analyze a relationship between the first feature information of the first voice information and the first feature information of the preset voice information, to obtain a first feature coefficient of the first voice information;
  • the third acquiring module is further configured to analyze a relationship between the second feature information of the first voice information and the second feature information of the preset voice information, to obtain a second feature coefficient of the first voice information;
  • the determining module is configured to determine whether the first feature coefficient is less than a first threshold and the second feature coefficient is less than a second threshold;
  • the processing module is configured to determine that the first voice information matches the preset voice information and perform the operation corresponding to the first voice information, if the first feature coefficient is smaller than the first threshold and the second feature coefficient is smaller than the second threshold.
  • The processing module is further configured to:
  • determine, if the first feature coefficient is smaller than the first threshold and the second feature coefficient is smaller than the second threshold, that the first voice information matches the preset voice information, and acquire the preset operation authority of the preset voice information.
  • The apparatus further includes a fourth acquiring unit, a second determining unit, and a setting unit, wherein:
  • the fourth acquiring unit is configured to perform spectrum analysis on the first time domain waveform of the first voice information and acquire a frequency of the first voice information;
  • the second determining unit is configured to determine whether the frequency of the first voice information is within a preset frequency range;
  • the setting unit is configured to set an operation authority of the user corresponding to the first voice information if the frequency of the first voice information is within the preset frequency range.
  • With the voice information processing method and apparatus, the first voice information can be acquired and then analyzed to obtain its first feature information and second feature information; the relationship between the first voice information and the preset voice information is determined based on that first feature information and second feature information, and whether to perform the operation corresponding to the first voice information is finally decided according to the determination result.
  • When user voice information is recognized, the first feature information and the second feature information of the user voice information can thus be recognized simultaneously to identify the relationship between the user voice information and the preset voice information, solving the prior-art problem that the target user cannot be accurately identified according to voice information:
  • the target user can be accurately identified and the user's operation authority accurately matched, preventing situations in which a user's operation falls outside the actual operation authority, thereby improving the interaction capability between the user and the device.
  • FIG. 1 is a schematic flowchart of a voice information processing method according to an embodiment of the present invention;
  • FIG. 2 is a schematic flowchart of another voice information processing method according to an embodiment of the present invention;
  • FIG. 3 is a schematic flowchart of still another voice information processing method according to an embodiment of the present invention;
  • FIG. 4 is a schematic flowchart of still another voice information processing method according to an embodiment of the present invention;
  • FIG. 5 is a schematic structural diagram of a voice information processing apparatus according to an embodiment of the present invention;
  • FIG. 6 is a schematic structural diagram of another voice information processing apparatus according to an embodiment of the present invention;
  • FIG. 7 is a schematic structural diagram of still another voice information processing apparatus according to an embodiment of the present invention;
  • FIG. 8 is a schematic structural diagram of a voice information processing apparatus according to another embodiment of the present invention;
  • FIG. 9 is a schematic structural diagram of another voice information processing apparatus according to another embodiment of the present invention.
  • An embodiment of the present invention provides a voice information processing method. Referring to FIG. 1, the method includes the following steps:
  • Step 101 Acquire first voice information.
  • Obtaining the first voice information in step 101 may be implemented by the voice information processing apparatus.
  • The voice information processing apparatus may be a smart phone, a navigator, a tablet computer, a smart TV, a smart refrigerator, a smart relay, an air conditioner, or another device capable of performing voice recognition and executing corresponding operations. The first voice information may be real-time voice information by which the user controls the smart device to perform a related operation.
  • Collection of the first voice information may start when the user begins speaking and stop after the user has stopped talking for more than a period of time; this period may be set by the user as desired, for example 5 seconds.
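The silence-timeout collection described above can be sketched as a simple energy-based endpointing loop. This is a hedged illustration, not the patent's implementation: the per-frame energy representation, the silence threshold, and all names are assumptions.

```python
def collect_until_silence(frames, frame_rate_hz, silence_threshold=0.01,
                          timeout_s=5.0):
    """Return the frames captured before the user stayed silent too long.

    `frames` is an iterable of per-frame RMS energy values; collection
    stops once `timeout_s` worth of consecutive frames fall below
    `silence_threshold` (5 seconds by default, mirroring the example in
    the text). All parameter names and values are illustrative.
    """
    max_silent_frames = int(timeout_s * frame_rate_hz)
    collected, silent_run = [], 0
    for energy in frames:
        collected.append(energy)
        silent_run = silent_run + 1 if energy < silence_threshold else 0
        if silent_run >= max_silent_frames:
            # Drop the trailing silence and stop collecting.
            return collected[:-silent_run]
    return collected
```

A real device would feed microphone frames into such a loop continuously rather than iterating over a finished list.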
  • Step 102 Perform analysis processing on the first voice information to obtain first feature information and second feature information of the first voice information.
  • Step 102, in which the first voice information is analyzed to obtain the first feature information and the second feature information of the first voice information, may be implemented by the voice information processing device.
  • The first feature information may include physical feature information of the sound, such as timbre, resonance, and resonance mode;
  • the second feature information may include behavioral feature information of the sound, such as volume and speech rate.
  • Step 103 Determine, according to the first feature information and the second feature information of the first voice information, a relationship between the first voice information and the preset voice information, and determine, according to the determination result, whether to perform an operation corresponding to the first voice information.
  • Step 103, in which the relationship between the first voice information and the preset voice information is determined according to the first feature information and the second feature information of the first voice information, and whether to perform the operation corresponding to the first voice information is decided according to the determination result, may be implemented by the voice information processing device.
  • With the voice information processing method, the first voice information can be acquired and then analyzed to obtain its first feature information and second feature information; the relationship between the first voice information and the preset voice information is determined based on that first feature information and second feature information, and whether to perform the operation corresponding to the first voice information is finally decided according to the determination result.
  • When user voice information is recognized, the first feature information and the second feature information of the user voice information can thus be recognized simultaneously to identify the relationship between the user voice information and the preset voice information. This solves the prior-art problem that the target user cannot be accurately identified according to voice information: the target user can be accurately identified and the user's operation authority accurately matched, avoiding situations in which a user's operation falls outside the actual operation authority and improving the interaction capability between the user and the device.
  • An embodiment of the invention provides a voice information processing method. Referring to FIG. 2, the method includes the following steps:
  • Step 201 The voice information processing apparatus acquires the first voice information.
  • Step 202 The voice information processing apparatus acquires a first time domain waveform corresponding to the first voice information.
  • The first time domain waveform corresponding to the first voice information is the original, unprocessed waveform of the collected first voice information.
  • Step 203 The voice information processing apparatus determines whether the first time domain waveform of the first voice information is continuous.
  • For the first voice information, that is, real-time voice information uttered by the user, the time domain waveform is continuous within the receiving time.
  • By contrast, the time domain waveform of recorded voice information is obtained by sampling in a digital device, so the corresponding time domain waveform received within the receiving time is discontinuous overall, which makes it possible to distinguish live speech from a played-back recording.
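One illustrative way to approximate the continuity judgment described above is to flag abrupt sample-to-sample jumps in the received waveform. The jump bound and the function name below are assumptions, not taken from the patent, which does not specify how continuity is tested.

```python
def is_continuous(samples, max_jump=0.3):
    """Heuristic continuity check on a time domain waveform.

    A live utterance should not contain sample-to-sample jumps larger
    than `max_jump` (normalized amplitude); large jumps suggest the
    stitched/sampled signal of a playback. The bound is illustrative.
    """
    return all(abs(b - a) <= max_jump
               for a, b in zip(samples, samples[1:]))
```

In the flow of steps 203-206, a False result would trigger re-acquisition of the first voice information.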
  • In step 203, whether the first time domain waveform of the first voice information is continuous is determined, after which either step 204 or steps 205-206 are performed: if the first time domain waveform of the first voice information is discontinuous, step 204 is performed; if it is continuous, steps 205-206 are performed.
  • Step 204 If the first time domain waveform of the first voice information is discontinuous, the voice information processing device reacquires the first voice information.
  • In this case the first voice information may be deleted directly and the voice information re-acquired, avoiding erroneous operations.
  • Step 205 If the first time domain waveform of the first voice information is continuous, the voice information processing apparatus performs analysis processing on the first voice information to obtain first feature information and second feature information of the first voice information.
  • Spectrum analysis may be performed on the first time domain waveform of the first voice information to obtain a frequency domain waveform of the first voice information, from which first feature information such as the timbre, resonance, and resonance mode of the first voice information is acquired; at the same time, second feature information such as the volume and speech rate of the first voice information is acquired from the first time domain waveform of the first voice information.
  • Step 206 The voice information processing device determines, according to the first feature information and the second feature information of the first voice information, a relationship between the first voice information and the preset voice information, and determines, according to the determination result, whether to perform the operation corresponding to the first voice information.
  • The preset voice information is at least one piece of user voice information that is recorded in advance and saved on the smart device locally or in the corresponding cloud. When the preset voice information is recorded and sampled on the local system, a high compression ratio of the sampled file should be ensured as far as possible to reduce the user's network usage cost; preset voice information stored in the cloud can be accessed by the smart device communicating with the cloud over a wireless network, with the preset voice information obtained on the local system being stored to the cloud.
  • With the voice information processing method provided by this embodiment of the present invention, the first voice information can be acquired and then analyzed to obtain its first feature information and second feature information; the relationship between the first voice information and the preset voice information is determined based on that first feature information and second feature information, and whether to perform the operation corresponding to the first voice information is finally decided according to the determination result.
  • When user voice information is recognized, the first feature information and the second feature information of the user voice information can thus be recognized simultaneously to identify the relationship between the user voice information and the preset voice information, solving the prior-art problem that the target user cannot be accurately identified according to voice information.
  • An embodiment of the present invention provides a voice information processing method. Referring to FIG. 3, the method includes the following steps:
  • Step 301 The voice information processing apparatus acquires the first voice information.
  • Step 302 The voice information processing apparatus acquires a first time domain waveform corresponding to the first voice information.
  • Step 303 The voice information processing apparatus determines whether the first time domain waveform of the first voice information is continuous.
  • The voice information processing apparatus determines whether the first time domain waveform of the first voice information is continuous and then performs either step 304 or steps 305-312: if the first time domain waveform of the first voice information is discontinuous, step 304 is performed; if it is continuous, steps 305-312 are performed.
  • Step 304 If the first time domain waveform of the first voice information is discontinuous, the voice information processing device reacquires the first voice information.
  • Step 305 If the first time domain waveform of the first voice information is continuous, the voice information processing apparatus performs spectrum analysis on the first time domain waveform of the first voice information to obtain a frequency domain waveform of the first voice information.
  • the first time domain waveform of the first voice information may be spectrally analyzed by using a Fourier transform method to obtain a frequency domain waveform of the first voice information.
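As a hedged sketch of step 305, NumPy's real FFT is one common way to implement Fourier-transform spectrum analysis; the patent does not name a library, and the 440 Hz tone example below is purely illustrative.

```python
import numpy as np

def frequency_domain(waveform, sample_rate_hz):
    """Fourier-transform a time domain waveform into (freqs, magnitudes)."""
    spectrum = np.fft.rfft(waveform)
    freqs = np.fft.rfftfreq(len(waveform), d=1.0 / sample_rate_hz)
    return freqs, np.abs(spectrum)

# Illustrative check: a pure 440 Hz tone should peak near 440 Hz.
sr = 8000
t = np.arange(sr) / sr              # one second of samples
tone = np.sin(2 * np.pi * 440 * t)
freqs, mags = frequency_domain(tone, sr)
peak_hz = freqs[np.argmax(mags)]
```

The resulting frequency domain waveform is what the first feature analysis of step 306 would operate on.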
  • Step 306 The voice information processing apparatus acquires first feature information of the first voice information according to the frequency domain waveform of the first voice information.
  • First feature analysis may be performed on the frequency domain waveform of the first voice information to obtain the first feature information of the first voice information; for the specific implementation of analyzing a frequency domain waveform to obtain the first feature information, reference may be made to the prior art, and details are not repeated here.
  • Step 307 The voice information processing device filters the first time domain waveform of the first voice information and performs processing by using a delay compensation mechanism to obtain a second time domain waveform of the first voice information.
  • The leading and trailing blank signals of the first time domain waveform of the first voice information may be filtered out, and the filtered first time domain waveform may then be processed by using a delay compensation mechanism to obtain the second time domain waveform of the first voice information, such that the second time domain waveform and the time domain waveform of the preset voice information can be dynamically consistent in waveform distribution, peak-to-trough spacing, time stamps, and the like.
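Step 307's blank-signal filtering and delay compensation can be sketched roughly as follows. The near-zero threshold and the peak-alignment shortcut (standing in for a full cross-correlation) are simplifying assumptions; the patent does not specify either.

```python
def trim_blanks(samples, eps=1e-3):
    """Strip near-zero (blank) samples from both ends of the waveform."""
    start = 0
    while start < len(samples) and abs(samples[start]) < eps:
        start += 1
    end = len(samples)
    while end > start and abs(samples[end - 1]) < eps:
        end -= 1
    return samples[start:end]

def align_by_delay(probe, reference):
    """Delay-compensate `probe` so it lines up with `reference`.

    A real system might maximize the cross-correlation; shifting so the
    global peaks coincide is a simplified stand-in. Output keeps the
    probe's length, padding with zeros where samples are shifted out.
    """
    lag = max(range(len(reference)), key=lambda i: abs(reference[i])) \
        - max(range(len(probe)), key=lambda i: abs(probe[i]))
    if lag >= 0:
        return [0.0] * lag + probe[: len(probe) - lag]
    return probe[-lag:] + [0.0] * (-lag)
```

The aligned result plays the role of the second time domain waveform that step 308 analyzes for the second feature information.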
  • Step 308 The voice information processing apparatus acquires second feature information of the first voice information according to the second time domain waveform of the first voice information.
  • For the specific implementation of performing second feature analysis on the second time domain waveform of the first voice information to obtain the second feature information of the first voice information, reference may be made to the prior art, and details are not repeated here.
  • Step 309 The voice information processing apparatus analyzes a relationship between the first feature information of the first voice information and the first feature information of the preset voice information, to obtain a first feature coefficient of the first voice information.
  • The first feature information of the first voice information and the first feature information of the preset voice information may be subtracted and the absolute value taken to obtain the first feature coefficient of the first voice information; of course, other prior-art methods may also be used to analyze the relationship between the first feature information of the first voice information and the first feature information of the preset voice information, and the analysis is not limited to the implementation proposed by the present invention.
  • The method of acquiring the first feature information of the preset voice information may be consistent with the method of acquiring the first feature information of the first voice information.
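The subtract-and-take-the-absolute-value computation described in steps 309-310 can be sketched as below; representing each feature as a single number and the feature names themselves ("timbre", "volume") are illustrative assumptions.

```python
def feature_coefficients(probe, preset):
    """Per-feature coefficients as in steps 309-310: the absolute
    difference between the probe's feature value and the preset voice
    information's feature value. `probe` and `preset` map (assumed)
    feature names to numeric values."""
    return {name: abs(probe[name] - preset[name]) for name in preset}
```

The same helper covers both the first feature coefficients (physical features) and the second feature coefficients (behavioral features), since the patent describes the identical operation for each.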
  • Step 310 The voice information processing apparatus analyzes a relationship between the second feature information of the first voice information and the second feature information of the preset voice information, to obtain a second feature coefficient of the first voice information.
  • The second feature information of the first voice information and the second feature information of the preset voice information may be subtracted and the absolute value taken to obtain the second feature coefficient of the first voice information; of course, other prior-art methods may also be used to analyze the relationship between the second feature information of the first voice information and the second feature information of the preset voice information, and the analysis is not limited to the implementation proposed by the present invention.
  • The method of acquiring the second feature information of the preset voice information may be consistent with the method of acquiring the second feature information of the first voice information.
  • Step 311 The voice information processing apparatus determines whether the first feature coefficient is less than the first threshold and the second feature coefficient is less than the second threshold.
  • The first threshold may be a single value set for all the first feature coefficients, or different values may be set for different first feature coefficients. For example, the first thresholds for the three first feature coefficients obtained from the first feature information of the first voice information (timbre, resonance, and resonance mode) may all be set to the same value; alternatively, the first threshold of the first feature coefficient corresponding to the timbre may be set to a first value, the first threshold of the first feature coefficient corresponding to the resonance to a second value, and the first threshold of the first feature coefficient corresponding to the resonance mode to a third value.
  • Similarly, the second threshold may be a single value set for all the second feature coefficients, or different values may be set for different second feature coefficients. For example, the second thresholds for the two second feature coefficients obtained from the second feature information of the first voice information (volume and speech rate) may be set to the same value; alternatively, the second threshold of the second feature coefficient corresponding to the volume may be set to a fourth value, and the second threshold of the second feature coefficient corresponding to the speech rate to a fifth value. The user may set the first threshold and the second threshold according to the actual application scenario and the desired effect.
  • Step 312 If the first feature coefficient is less than the first threshold and the second feature coefficient is less than the second threshold, the voice information processing apparatus determines that the first voice information matches the preset voice information and acquires the preset operation authority of the preset voice information.
  • the second feature coefficient is greater than or equal to the second threshold, or the first feature coefficient of the first voice information is greater than or equal to the first threshold and the second feature coefficient is greater than or equal to
  • the second threshold value is that the first voice information does not match the preset voice information, and the operation corresponding to the first voice information is not performed.
  • the semantics of the first voice information may be matched with the semantics of the preset voice information to enhance the verification process. safety.
  • step 312 can be implemented in the following specific manner:
  • Step 312a If the first feature coefficient is smaller than the first threshold and the second feature coefficient is smaller than the second threshold, the voice information processing apparatus determines that the first voice information matches the preset voice information and acquires the preset operation authority of the preset voice information.
  • the preset operation authority may be a range in which different users can operate on the smart device based on the function setting of the smart device, and the safety factor of the operation is improved; the preset operation authority may be preset and stored in the smart device by the user. middle.
  • Step 312b The voice information processing apparatus identifies the first voice information, and obtains a first operation corresponding to the first voice information.
  • the first voice information may be semantically recognized to obtain a first operation corresponding to the first voice information; wherein the first operation may be an operation that the user desires the smart device to perform, and the implementation method of the semantic recognition may refer to the implementation of the prior art. Way, no more details here.
  • Step 312c The voice information processing apparatus determines whether the first operation is in the preset operation authority, and if the first operation is in the preset operation authority, performing the first operation.
  • determining whether the first operation is in the preset operation authority may be implemented by determining whether the operation corresponding to the first voice information can find the same operation in the preset operation range, if the first voice information corresponds to The operation is the same as at least one of the preset operation ranges, and the smart device responds and performs the first operation.
  • the voice information processing party The law also includes:
  • Step 313 The voice information processing apparatus performs spectrum analysis on the first time domain waveform of the first voice information, and acquires the frequency of the first voice information.
  • Step 314 The voice information processing apparatus determines whether the frequency of the first voice information is within a preset frequency range.
  • the preset frequency may be set according to different sound frequencies corresponding to different age stages of the user.
  • the preset frequency range may be set to a sound frequency range corresponding to the minor; for example, the male voice frequency is taken as an example for description: the sound frequency before the variable sound period (minor) is 174.614 Hz to 184.997 Hz, After the sound is changed (adult), the sound frequency is 87.307 Hz to 92.499 Hz.
  • Step 315 If the frequency of the first voice information is within the preset frequency range, the voice information processing apparatus sets the operation authority of the user corresponding to the first voice information.
  • the frequency range of the first voice information is within the preset frequency range, indicating that the user who sends the voice information at this time is a minor, it is necessary to limit the function that the minor can use the smart device. For example, you can set to disable the smart device or limit certain features of the smart device that cannot be used, such as smart relays that stop powering the power outlet, cannot use the premium channel of the smart TV, or cannot use the gaming features of the smart device; when the first voice message When the frequency is within the preset frequency range, the operation of the first operation corresponding to the first voice information in the limited function range is not performed; when the frequency of the first voice information is outside the preset range, the first voice information and the pre-determination are determined. The relationship between the voice information is set, and the subsequent processing flow is performed according to the relationship between the first voice information and the preset voice information.
  • the preset frequency range may also be a sound frequency range corresponding to an adult. If the frequency of the first voice information is outside the preset frequency range, the voice information processing apparatus sets the first. The user's operation authority corresponding to the voice information. The setting of the preset frequency range may be performed according to the specific needs and wishes of the user, or may be set when the smart device is shipped from the factory.
  • the first feature information in all the embodiments of the present invention may be physical feature information of the sound
  • the first feature coefficient may be a physical feature coefficient corresponding to the physical feature information
  • the second feature information may be a behavior feature information of the sound
  • the second feature coefficient It may be a behavior characteristic coefficient corresponding to the behavior characteristic information.
  • the voice information processing method provided by the embodiment of the present invention can obtain the first voice information, and then analyze and process the first voice information to obtain first feature information and second feature information of the first voice information, and based on the first voice.
  • the first feature information and the second feature information of the information determine a relationship between the first voice information and the preset voice information, and finally determine, according to the determination result, whether to perform an operation corresponding to the first voice information; thus, performing user voice information
  • the first feature information and the second feature information of the user voice information can be simultaneously recognized to identify the relationship between the user voice information and the preset voice information, which solves the problem that the target user cannot be accurately identified according to the voice information in the prior art.
  • the embodiment of the present invention provides a voice information processing apparatus 4, which can be applied to a voice information processing method according to the embodiment of the present invention.
  • the apparatus includes: a first acquiring unit 41. a second obtaining unit 42 and a first processing unit 43, wherein:
  • the first obtaining unit 41 is configured to acquire first voice information.
  • the second obtaining unit 42 is configured to perform analysis processing on the first voice information to obtain first feature information and second feature information of the first voice information.
  • the first processing unit 43 is configured to determine, according to the first feature information and the second feature information of the first voice information, a relationship between the first voice information and the preset voice information, and determine whether to perform the first voice according to the determination result. The operation corresponding to the information.
  • the voice information processing apparatus is capable of acquiring the first voice information, and then performing analysis processing on the first voice information to obtain first feature information and second feature information of the first voice information, and based on the first voice
  • the first feature information and the second feature information of the information determine a relationship between the first voice information and the preset voice information, and finally determine, according to the determination result, whether to perform an operation corresponding to the first voice information; thus, performing user voice information
  • the first feature information and the second feature information of the user voice information can be simultaneously recognized to identify the relationship between the user voice information and the preset voice information, which solves the problem that the target user cannot be accurately identified according to the voice information in the prior art. It can accurately identify the target user and accurately match the user's operation authority, avoiding the situation that some users' operations are not within the actual operation authority, and improving the interaction ability between the user and the device.
  • the apparatus further includes: a third obtaining unit 44, a first determining unit 45, and a second processing unit 46, wherein:
  • the third obtaining unit 44 is configured to acquire a first time domain waveform corresponding to the first voice information.
  • the first determining unit 45 is configured to determine whether the first time domain waveform of the first voice information is continuous.
  • the second processing unit 46 is configured to perform analysis processing on the first voice information to obtain first feature information and second feature information of the first voice information, if the first time domain waveform of the first voice information is continuous.
  • the second processing unit 46 is further configured to reacquire the first voice information if the first time domain waveform of the first voice information is discontinuous.
  • the second obtaining unit 42 includes: a first obtaining module 421 and a second acquiring module 422, where:
  • the first obtaining module 421 is configured to perform spectrum analysis on the first time domain waveform of the first voice information to obtain a frequency domain waveform of the first voice information.
  • the first obtaining module 421 is further configured to acquire first feature information of the first voice information according to the frequency domain waveform of the first voice information.
  • the second obtaining module 422 is configured to filter the first time domain waveform of the first voice information and adopt a delay compensation
  • the compensation mechanism performs processing to obtain a second time domain waveform of the first voice information.
  • the second obtaining module 422 is further configured to acquire second feature information of the first voice information according to the second time domain waveform of the first voice information.
  • the first processing unit 43 includes: a third obtaining module 431, a determining module 432, and a processing module 433, where:
  • the third obtaining module 431 is configured to analyze a relationship between the first feature information of the first voice information and the first feature information of the preset voice information, to obtain a first feature coefficient of the first voice information.
  • the third obtaining module 431 is further configured to analyze a relationship between the second feature information of the first voice information and the second feature information of the preset voice information, to obtain a second feature coefficient of the first voice information.
  • the determining module 432 is configured to determine whether the first feature coefficient is less than the first threshold and the second feature coefficient is less than the second threshold.
  • the processing module 433 is configured to: if the first feature coefficient is less than the first threshold and the second feature coefficient is less than the second threshold, the first voice information matches the preset voice information and performs an operation corresponding to the first voice information.
  • processing module 433 is specifically configured to perform the following steps:
  • the first voice information matches the preset voice information and acquires a preset operation authority of the preset voice information.
  • the apparatus further includes: a fourth obtaining unit 47, a second determining unit 48, and a setting unit 49, wherein:
  • the fourth obtaining unit 47 is configured to perform spectrum analysis on the first time domain waveform of the first voice information, and acquire the frequency of the first voice information.
  • the second determining unit 48 is configured to determine whether the frequency of the first voice information is within a preset frequency range.
  • the setting unit 49 is configured to set an operation authority of the user corresponding to the first voice information if the frequency of the first voice information is within the preset frequency range.
  • the voice information processing apparatus is capable of acquiring the first voice information, and then performing analysis processing on the first voice information to obtain first feature information and second feature information of the first voice information, and based on the first voice
  • the first feature information and the second feature information of the information determine a relationship between the first voice information and the preset voice information, and finally determine, according to the determination result, whether to perform an operation corresponding to the first voice information; thus, performing user voice information
  • the first feature information and the second feature information of the user voice information can be simultaneously recognized to identify the relationship between the user voice information and the preset voice information, which solves the problem that the target user cannot be accurately identified according to the voice information in the prior art.
  • the unit 48, the setting unit 49, the first obtaining module 421, the second obtaining module 422, the third obtaining module 431, the judging module 432, and the processing module 433 may each be a central processing unit (CPU) located in the wireless data transmitting device. ), a microprocessor (Micro Processor Unit, MPU), a digital signal processor (DSP), or a Field Programmable Gate Array (FPGA).
  • CPU central processing unit
  • MPU Micro Processor Unit
  • DSP digital signal processor
  • FPGA Field Programmable Gate Array
  • embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention can take the form of a hardware embodiment, a software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage, etc.) including computer usable program code.
  • the computer program instructions can also be stored in a computer readable memory that can direct a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture comprising the instruction device.
  • the apparatus implements the functions specified in one or more blocks of a flow or a flow and/or block diagram of the flowchart.
  • These computer program instructions can also be loaded onto a computer or other programmable data processing device such that a series of operational steps are performed on a computer or other programmable device to produce computer-implemented processing for execution on a computer or other programmable device.
  • the instructions provide steps for implementing the functions specified in one or more of the flow or in a block or blocks of a flow diagram.
  • the present invention relates to a voice processing technology in the field of communications, which solves the problem that the target user cannot accurately identify the target user according to the voice information in the prior art, can accurately identify the target user and accurately match the operation authority of the user, and avoids that some user operations are not actually operated. The situation within the permissions occurs, improving the interaction between the user and the device.

Abstract

A voice information processing method and device. The method includes: acquiring first voice information (101); analyzing the first voice information to obtain first feature information and second feature information of the first voice information (102); and determining, based on the first feature information and the second feature information of the first voice information, the relationship between the first voice information and preset voice information, and determining, according to the determination result, whether to perform the operation corresponding to the first voice information (103). The method solves the problem in the prior art that the target user cannot be accurately identified from voice information, can accurately identify the target user and precisely match the user's operation authority, prevents situations in which a user's operation falls outside the actual operation authority, and improves the interaction capability between the user and the device.

Description

Voice Information Processing Method and Device

Technical Field

The present invention relates to voice processing technology in the field of communications, and in particular to a voice information processing method and device.

Background Art

With the rapid development of Internet technology, wireless networks have been widely applied across industries, and voice services have emerged accordingly. Owing to the convenience of voice interaction, voice technology is widely used in work, entertainment, sports, and other aspects of daily life. For example, Google integrated a new voice dictation tool into its Google Docs application, freeing users from traditional keyboard-based human-computer interaction; Microsoft and Apple have integrated the voice products of their handheld terminal devices, Cortana and Siri respectively, into their computer systems; some smartphones and wearable devices can also interact with terminal devices through voice technology. Existing voice recognition technology mainly converts the linguistic content of the user's voice into text, locally or in the cloud, compares that text with the corresponding text in sampled data, and at the same time compares the frequency resonance of the two voice segments, so as to distinguish different users.

However, existing voice recognition technology only performs experience-based identification of certain "physical features" of the user's voice. It does not account for variations in voice frequency caused by the user's speaking habits, mood, and other factors, which leads to large errors relative to the sampled user voice frequency. As a result, the target user cannot be identified accurately, some users' operations fall outside their actual operation authority, and the user experience suffers.
Summary of the Invention

To solve the above technical problem, embodiments of the present invention provide a voice information processing method and device, which at least partially solve the problem in the prior art that the target user cannot be accurately identified from voice information.

To achieve the above purpose, the technical solutions of the embodiments of the present invention are implemented as follows:

A voice information processing method, the method including:

acquiring first voice information;

analyzing the first voice information to obtain first feature information and second feature information of the first voice information;

determining, based on the first feature information and the second feature information of the first voice information, the relationship between the first voice information and preset voice information, and determining, according to the determination result, whether to perform the operation corresponding to the first voice information.

Optionally, before the analyzing of the first voice information to obtain the first feature information and the second feature information, the method further includes:

acquiring a first time-domain waveform corresponding to the first voice information;

determining whether the first time-domain waveform of the first voice information is continuous;

if the first time-domain waveform of the first voice information is continuous, performing the analyzing of the first voice information to obtain the first feature information and the second feature information;

if the first time-domain waveform of the first voice information is discontinuous, reacquiring the first voice information.

Optionally, the analyzing of the first voice information to obtain the first feature information and the second feature information includes:

performing spectrum analysis on the first time-domain waveform of the first voice information to obtain a frequency-domain waveform of the first voice information;

acquiring the first feature information of the first voice information according to the frequency-domain waveform of the first voice information;

filtering the first time-domain waveform of the first voice information and processing it with a delay compensation mechanism to obtain a second time-domain waveform of the first voice information;

acquiring the second feature information of the first voice information according to the second time-domain waveform of the first voice information.

Optionally, the determining of the relationship between the first voice information and the preset voice information based on the first feature information and the second feature information, and the determining of whether to perform the corresponding operation according to the determination result, include:

analyzing the relationship between the first feature information of the first voice information and the first feature information of the preset voice information to obtain a first feature coefficient of the first voice information;

analyzing the relationship between the second feature information of the first voice information and the second feature information of the preset voice information to obtain a second feature coefficient of the first voice information;

determining whether the first feature coefficient is less than a first threshold and whether the second feature coefficient is less than a second threshold;

if the first feature coefficient is less than the first threshold and the second feature coefficient is less than the second threshold, determining that the first voice information matches the preset voice information and performing the operation corresponding to the first voice information.

Optionally, the determining that the first voice information matches the preset voice information and the performing of the corresponding operation when both coefficients are below their thresholds include:

if the first feature coefficient is less than the first threshold and the second feature coefficient is less than the second threshold, determining that the first voice information matches the preset voice information and acquiring a preset operation authority of the preset voice information;

recognizing the first voice information to obtain a first operation corresponding to the first voice information;

determining whether the first operation is within the preset operation authority, and performing the first operation if it is.

Optionally, the method further includes:

performing spectrum analysis on the first time-domain waveform of the first voice information to acquire the frequency of the first voice information;

determining whether the frequency of the first voice information is within a preset frequency range;

if the frequency of the first voice information is within the preset frequency range, setting the operation authority of the user corresponding to the first voice information.
A voice information processing device, the device including a first acquiring unit, a second acquiring unit, and a first processing unit, wherein:

the first acquiring unit is configured to acquire first voice information;

the second acquiring unit is configured to analyze the first voice information to obtain first feature information and second feature information of the first voice information;

the first processing unit is configured to determine, based on the first feature information and the second feature information of the first voice information, the relationship between the first voice information and preset voice information, and to determine, according to the determination result, whether to perform the operation corresponding to the first voice information.

Optionally, the device further includes a third acquiring unit, a first determining unit, and a second processing unit, wherein:

the third acquiring unit is configured to acquire a first time-domain waveform corresponding to the first voice information;

the first determining unit is configured to determine whether the first time-domain waveform of the first voice information is continuous;

the second processing unit is configured to, if the first time-domain waveform of the first voice information is continuous, perform the analyzing of the first voice information to obtain the first feature information and the second feature information;

the second processing unit is further configured to, if the first time-domain waveform of the first voice information is discontinuous, reacquire the first voice information.

Optionally, the second acquiring unit includes a first acquiring module and a second acquiring module, wherein:

the first acquiring module is configured to perform spectrum analysis on the first time-domain waveform of the first voice information to obtain a frequency-domain waveform of the first voice information;

the first acquiring module is further configured to acquire the first feature information of the first voice information according to the frequency-domain waveform of the first voice information;

the second acquiring module is configured to filter the first time-domain waveform of the first voice information and process it with a delay compensation mechanism to obtain a second time-domain waveform of the first voice information;

the second acquiring module is further configured to acquire the second feature information of the first voice information according to the second time-domain waveform of the first voice information.

Optionally, the first processing unit includes a third acquiring module, a determining module, and a processing module, wherein:

the third acquiring module is configured to analyze the relationship between the first feature information of the first voice information and the first feature information of the preset voice information to obtain a first feature coefficient of the first voice information;

the third acquiring module is further configured to analyze the relationship between the second feature information of the first voice information and the second feature information of the preset voice information to obtain a second feature coefficient of the first voice information;

the determining module is configured to determine whether the first feature coefficient is less than a first threshold and whether the second feature coefficient is less than a second threshold;

the processing module is configured to, if the first feature coefficient is less than the first threshold and the second feature coefficient is less than the second threshold, determine that the first voice information matches the preset voice information and perform the operation corresponding to the first voice information.

Optionally, the processing module is further specifically configured to:

if the first feature coefficient is less than the first threshold and the second feature coefficient is less than the second threshold, determine that the first voice information matches the preset voice information and acquire a preset operation authority of the preset voice information;

recognize the first voice information to obtain a first operation corresponding to the first voice information;

determine whether the first operation is within the preset operation authority, and perform the first operation if it is.

Optionally, the device further includes a fourth acquiring unit, a second determining unit, and a setting unit, wherein:

the fourth acquiring unit is configured to perform spectrum analysis on the first time-domain waveform of the first voice information to acquire the frequency of the first voice information;

the second determining unit is configured to determine whether the frequency of the first voice information is within a preset frequency range;

the setting unit is configured to, if the frequency of the first voice information is within the preset frequency range, set the operation authority of the user corresponding to the first voice information.

The voice information processing method and device provided by the embodiments of the present invention can acquire first voice information, analyze the first voice information to obtain its first feature information and second feature information, determine, based on the first feature information and the second feature information of the first voice information, the relationship between the first voice information and preset voice information, and finally determine, according to the determination result, whether to perform the operation corresponding to the first voice information. In this way, when a user's voice information is recognized, its first feature information and second feature information can be considered simultaneously to identify the relationship between the user's voice information and the preset voice information. This solves the problem in the prior art that the target user cannot be accurately identified from voice information, enables the target user to be identified accurately and the user's operation authority to be matched precisely, prevents situations in which a user's operation falls outside the actual operation authority, and improves the interaction capability between the user and the device.
Brief Description of the Drawings

FIG. 1 is a schematic flowchart of a voice information processing method according to an embodiment of the present invention;

FIG. 2 is a schematic flowchart of another voice information processing method according to an embodiment of the present invention;

FIG. 3 is a schematic flowchart of yet another voice information processing method according to an embodiment of the present invention;

FIG. 4 is a schematic flowchart of still another voice information processing method according to an embodiment of the present invention;

FIG. 5 is a schematic structural diagram of a voice information processing device according to an embodiment of the present invention;

FIG. 6 is a schematic structural diagram of another voice information processing device according to an embodiment of the present invention;

FIG. 7 is a schematic structural diagram of yet another voice information processing device according to an embodiment of the present invention;

FIG. 8 is a schematic structural diagram of a voice information processing device according to another embodiment of the present invention;

FIG. 9 is a schematic structural diagram of another voice information processing device according to another embodiment of the present invention.
Detailed Description

The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings.

An embodiment of the present invention provides a voice information processing method. Referring to FIG. 1, the method includes the following steps:

Step 101: Acquire first voice information.

Specifically, step 101 may be performed by a voice information processing device. The voice information processing device may be a smart device capable of voice recognition and of performing corresponding operations, such as a smartphone, navigator, tablet, smart TV, smart refrigerator, smart relay, or air conditioner. The first voice information may be real-time voice information sent by a user to make the smart device perform a related operation. Collection of the first voice information may start when the user begins speaking and stop after the user has stopped speaking for more than a certain period, which the user may set as desired, for example 5 seconds.

Step 102: Analyze the first voice information to obtain first feature information and second feature information of the first voice information.

Specifically, step 102 may be performed by the voice information processing device. The first feature information may include physical features of the voice such as timbre, resonance, and resonance mode; the second feature information may include behavioral features of the voice such as volume level and speech rate.

Step 103: Based on the first feature information and the second feature information of the first voice information, determine the relationship between the first voice information and preset voice information, and determine, according to the determination result, whether to perform the operation corresponding to the first voice information.

Specifically, step 103 may be performed by the voice information processing device. The relationship between the first feature information of the first voice information and the first feature information of the preset voice information is compared, and at the same time the relationship between the second feature information of the first voice information and the second feature information of the preset voice information is compared, to determine whether the first voice information matches the preset voice information. If the first voice information matches the preset voice information, the operation corresponding to the first voice information is performed; if it does not match, the operation is not performed, and a corresponding voice prompt may be issued, for example "You do not have operation authority".
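The match-and-execute decision of steps 101 to 103 can be sketched as follows. This is a minimal illustration only: the scalar feature representation, the threshold value, and the helper names are placeholders invented for the example, not the patent's concrete implementation.

```python
def extract_features(info):
    # Hypothetical extractor: 'info' is a dict holding precomputed
    # physical (timbre/resonance) and behavioral (volume/rate) scores.
    return info["physical"], info["behavioral"]

def matches(a, b, threshold=0.1):
    # A feature coefficient is |a - b|; a match requires it below the threshold.
    return abs(a - b) < threshold

def process_voice(first_info, preset_info):
    f_phys, f_behav = extract_features(first_info)    # step 102
    p_phys, p_behav = extract_features(preset_info)
    # Step 103: both kinds of features must match the preset sample.
    if matches(f_phys, p_phys) and matches(f_behav, p_behav):
        return "execute"
    return "You do not have operation authority"

preset = {"physical": 0.52, "behavioral": 0.33}
print(process_voice({"physical": 0.50, "behavioral": 0.30}, preset))  # execute
print(process_voice({"physical": 0.90, "behavioral": 0.30}, preset))
```

The second call fails on the physical feature alone, which is the point of checking both feature kinds: neither a physical nor a behavioral match is sufficient by itself.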
The voice information processing method provided by this embodiment of the present invention can acquire first voice information, analyze the first voice information to obtain its first feature information and second feature information, determine, based on the first feature information and the second feature information of the first voice information, the relationship between the first voice information and preset voice information, and finally determine, according to the determination result, whether to perform the operation corresponding to the first voice information. In this way, when a user's voice information is recognized, its first feature information and second feature information can be considered simultaneously to identify the relationship between the user's voice information and the preset voice information. This solves the problem in the prior art that the target user cannot be accurately identified from voice information, enables the target user to be identified accurately and the user's operation authority to be matched precisely, prevents situations in which a user's operation falls outside the actual operation authority, and improves the interaction capability between the user and the device.
An embodiment of the present invention provides a voice information processing method. Referring to FIG. 2, the method includes the following steps:

Step 201: The voice information processing device acquires first voice information.

Step 202: The voice information processing device acquires a first time-domain waveform corresponding to the first voice information.

Specifically, the first time-domain waveform corresponding to the first voice information is the unprocessed raw waveform of the collected first voice information.

Step 203: The voice information processing device determines whether the first time-domain waveform of the first voice information is continuous.

Specifically, it is determined whether the time-domain waveform of the first voice information, that is, the real-time voice information sent by the user, is continuous within the reception time. The time-domain waveform of real-time voice information sent by a user is a continuous signal throughout the reception time, whereas recorded voice information is obtained by sampling on a digital device, so its corresponding time-domain waveform is, as a whole, discontinuous within the reception time.

Depending on the result of step 203, either step 204 or steps 205 to 206 are performed: if the first time-domain waveform of the first voice information is discontinuous, step 204 is performed; if it is continuous, steps 205 to 206 are performed.

Step 204: If the first time-domain waveform of the first voice information is discontinuous, the voice information processing device reacquires the first voice information.

Specifically, a discontinuous first time-domain waveform indicates that the currently acquired first voice information is recorded voice information. In this case, the first voice information can be deleted directly and voice information reacquired, avoiding misoperation.
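The liveness check of steps 203 to 204 can be sketched as a gap test on the sampled waveform. This is an illustrative heuristic only: the patent does not specify the concrete continuity criterion, and the silence threshold and maximum gap length below are assumed values.

```python
def is_continuous(samples, silence_threshold=1e-4, max_gap=400):
    # Treat a long run of near-zero samples inside the utterance as a
    # discontinuity, as might be produced by playback of a digitally
    # sampled recording rather than a live speaker.
    gap = 0
    for s in samples:
        if abs(s) < silence_threshold:
            gap += 1
            if gap > max_gap:
                return False
        else:
            gap = 0
    return True

live = [0.1, -0.2, 0.15, -0.05] * 300                 # continuously varying signal
recorded = [0.1] * 100 + [0.0] * 500 + [0.1] * 100    # long internal gap
print(is_continuous(live))      # True
print(is_continuous(recorded))  # False
```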
Step 205: If the first time-domain waveform of the first voice information is continuous, the voice information processing device analyzes the first voice information to obtain first feature information and second feature information of the first voice information.

Specifically, a continuous first time-domain waveform indicates that the first voice information is real-time voice information sent by the user. In this case, spectrum analysis can be performed on the first time-domain waveform to obtain the frequency-domain waveform of the first voice information, from which first feature information such as timbre, resonance, and resonance mode can be acquired; meanwhile, second feature information such as volume level and speech rate can be acquired from the first time-domain waveform.

Step 206: The voice information processing device determines, based on the first feature information and the second feature information of the first voice information, the relationship between the first voice information and preset voice information, and determines, according to the determination result, whether to perform the operation corresponding to the first voice information.

Specifically, the preset voice information is voice information of at least one user, recorded and sampled in advance and stored in the smart device's local system or in the corresponding cloud. When recording and sampling the preset voice information in the local system, a high compression rate for the sampled file can be used while preserving the audio quality of the preset voice information as far as possible, to reduce the user's network usage cost. Storing the preset voice information in the cloud can be achieved by the smart device communicating with the cloud over a wireless network and uploading the preset voice information obtained in the local system.

It should be noted that, for explanations of steps or concepts in this embodiment that are the same as in other embodiments, reference may be made to the descriptions in those embodiments, which are not repeated here.

The voice information processing method provided by this embodiment of the present invention can acquire first voice information, analyze the first voice information to obtain its first feature information and second feature information, determine, based on these, the relationship between the first voice information and preset voice information, and finally determine, according to the determination result, whether to perform the operation corresponding to the first voice information. In this way, when a user's voice information is recognized, its first feature information and second feature information can be considered simultaneously to identify the relationship between the user's voice information and the preset voice information. This solves the problem in the prior art that the target user cannot be accurately identified from voice information, enables the target user to be identified accurately and the user's operation authority to be matched precisely, prevents situations in which a user's operation falls outside the actual operation authority, and improves the interaction capability between the user and the device. Furthermore, it reduces the risk that recorded voice information is recognized during voice recognition, causing erroneous operations and unnecessary loss to the user's life and property.
An embodiment of the present invention provides a voice information processing method. Referring to FIG. 3, the method includes the following steps:

Step 301: The voice information processing device acquires first voice information.

Step 302: The voice information processing device acquires a first time-domain waveform corresponding to the first voice information.

Step 303: The voice information processing device determines whether the first time-domain waveform of the first voice information is continuous.

Depending on the result of step 303, either step 304 or steps 305 to 312 are performed: if the first time-domain waveform of the first voice information is discontinuous, step 304 is performed; if it is continuous, steps 305 to 312 are performed.

Step 304: If the first time-domain waveform of the first voice information is discontinuous, the voice information processing device reacquires the first voice information.

Step 305: If the first time-domain waveform of the first voice information is continuous, the voice information processing device performs spectrum analysis on the first time-domain waveform to obtain the frequency-domain waveform of the first voice information.

Specifically, a Fourier transform method may be used to perform spectrum analysis on the first time-domain waveform of the first voice information to obtain the frequency-domain waveform of the first voice information.
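The spectrum analysis of step 305 can be sketched with a discrete Fourier transform. This is a minimal illustration using NumPy; the sampling rate and the 180 Hz test tone are invented for the example and are not values from the patent.

```python
import numpy as np

sample_rate = 8000                            # assumed sampling rate (Hz)
t = np.arange(sample_rate) / sample_rate      # 1 second of sample times
waveform = np.sin(2 * np.pi * 180.0 * t)      # synthetic test tone at 180 Hz

spectrum = np.abs(np.fft.rfft(waveform))      # frequency-domain waveform
freqs = np.fft.rfftfreq(len(waveform), d=1 / sample_rate)
dominant = freqs[np.argmax(spectrum)]         # strongest frequency component
print(dominant)  # 180.0
```

With one second of audio the FFT bins are spaced 1 Hz apart, so the test tone lands exactly on a bin and the peak recovers its frequency.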
Step 306: The voice information processing device acquires the first feature information of the first voice information according to the frequency-domain waveform of the first voice information.

Specifically, first feature analysis may be performed on the frequency-domain waveform of the first voice information to obtain the first feature information; for the concrete implementation of deriving first feature information from a frequency-domain waveform, reference may be made to prior-art implementations, which are not repeated here.

Step 307: The voice information processing device filters the first time-domain waveform of the first voice information and processes it with a delay compensation mechanism to obtain a second time-domain waveform of the first voice information.

Specifically, the blank signals at the beginning and end of the first time-domain waveform may be filtered out, and the filtered first time-domain waveform then processed with a delay compensation mechanism to obtain the second time-domain waveform of the first voice information. The second time-domain waveform can thereby be made dynamically consistent with the time-domain waveform of the preset voice information in terms of waveform distribution, peak-to-trough spacing, timestamps, and the like.
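The filtering in step 307 can be sketched as trimming the leading and trailing blank signal, so that utterances spoken with different lead-in delays line up on a common time origin. This is one illustrative reading of the delay compensation mechanism, and the silence threshold is an assumed value.

```python
def trim_and_align(samples, silence_threshold=0.01):
    # Drop the near-zero head and tail of the waveform; what remains
    # starts at the first voiced sample, giving a common time origin.
    start = 0
    while start < len(samples) and abs(samples[start]) < silence_threshold:
        start += 1
    end = len(samples)
    while end > start and abs(samples[end - 1]) < silence_threshold:
        end -= 1
    return samples[start:end]

print(trim_and_align([0.0, 0.0, 0.3, -0.2, 0.5, 0.0]))  # [0.3, -0.2, 0.5]
```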
Step 308: The voice information processing device acquires the second feature information of the first voice information according to the second time-domain waveform of the first voice information.

Specifically, second feature analysis is performed on the second time-domain waveform of the first voice information to obtain the second feature information; for the implementation of deriving second feature information from a time-domain waveform, reference may be made to prior-art implementations, which are not repeated here.

Step 309: The voice information processing device analyzes the relationship between the first feature information of the first voice information and the first feature information of the preset voice information to obtain a first feature coefficient of the first voice information.

Specifically, the first feature information of the first voice information and the first feature information of the preset voice information may be subtracted and the absolute value taken to obtain the first feature coefficient of the first voice information. Of course, other methods adopted in the prior art may also be used to analyze this relationship; the analysis is not limited to the implementation proposed by the present invention. The method for acquiring the first feature information of the preset voice information may be consistent with the method for acquiring the first feature information of the first voice information.

Step 310: The voice information processing device analyzes the relationship between the second feature information of the first voice information and the second feature information of the preset voice information to obtain a second feature coefficient of the first voice information.

Specifically, the second feature information of the first voice information and the second feature information of the preset voice information may be subtracted and the absolute value taken to obtain the second feature coefficient of the first voice information. Again, other methods adopted in the prior art may also be used, and the method for acquiring the second feature information of the preset voice information may be consistent with the method for acquiring the second feature information of the first voice information.

Step 311: The voice information processing device determines whether the first feature coefficient is less than a first threshold and whether the second feature coefficient is less than a second threshold.

Specifically, the first threshold may be a single value set for all first feature coefficients, or different values may be set for different first feature coefficients. For example, the first threshold of the three first feature coefficients obtained from the first feature information (timbre, resonance, resonance mode) of the first voice information may be the same value; alternatively, the first threshold of the first feature coefficient corresponding to timbre may be a first value, that corresponding to resonance a second value, and that corresponding to resonance mode a third value. Likewise, the second threshold may be a single value set for all second feature coefficients, or different values may be set for different second feature coefficients: the second threshold of the two second feature coefficients obtained from the second feature information (volume level, speech rate) of the first voice information may be the same value, or the second threshold corresponding to volume level may be a fourth value and that corresponding to speech rate a fifth value. The user may set the first threshold and the second threshold according to the actual application scenario and the desired effect.
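Steps 309 to 311 reduce to per-feature absolute differences compared against per-feature thresholds. The feature names, values, and thresholds below are invented for illustration; the patent leaves the concrete values to the user.

```python
def feature_coefficients(measured, preset):
    # Steps 309/310: coefficient per feature = |measured - preset|.
    return {name: abs(measured[name] - preset[name]) for name in preset}

def within_thresholds(coeffs, thresholds):
    # Step 311: every coefficient must fall below its own threshold,
    # allowing a shared value or a distinct value per feature.
    return all(coeffs[name] < thresholds[name] for name in coeffs)

preset = {"timbre": 0.62, "resonance": 0.41, "volume": 0.55, "rate": 0.30}
measured = {"timbre": 0.60, "resonance": 0.44, "volume": 0.50, "rate": 0.33}
thresholds = {"timbre": 0.05, "resonance": 0.05, "volume": 0.10, "rate": 0.10}

coeffs = feature_coefficients(measured, preset)
print(within_thresholds(coeffs, thresholds))  # True
```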
Step 312: If the first feature coefficient is less than the first threshold and the second feature coefficient is less than the second threshold, the voice information processing device determines that the first voice information matches the preset voice information and acquires the preset operation authority of the preset voice information.

Specifically, if the first feature coefficient of the first voice information is greater than or equal to the first threshold, or the second feature coefficient is greater than or equal to the second threshold, or both, the first voice information is considered not to match the preset voice information, and the operation corresponding to the first voice information is not performed. In use, once it has been determined that the first feature coefficient is less than the first threshold and the second feature coefficient is less than the second threshold, the semantics of the first voice information may additionally be matched against the semantics of the preset voice information, strengthening the verification process to ensure security.

It should be noted that step 312 may be implemented in the following specific manner:

Step 312a: If the first feature coefficient is less than the first threshold and the second feature coefficient is less than the second threshold, the voice information processing device determines that the first voice information matches the preset voice information and acquires the preset operation authority of the preset voice information.

Specifically, the preset operation authority may be the range of operations that different users may perform on the smart device, set on the basis of the smart device's functions, which improves the safety factor of operations. The preset operation authority may be preset by the user and stored in the smart device.

Step 312b: The voice information processing device recognizes the first voice information to obtain a first operation corresponding to the first voice information.

Specifically, semantic recognition may be performed on the first voice information to obtain the first operation corresponding to the first voice information. The first operation may be the operation the user wishes the smart device to perform; for the implementation of semantic recognition, reference may be made to prior-art implementations, which are not repeated here.

Step 312c: The voice information processing device determines whether the first operation is within the preset operation authority, and if the first operation is within the preset operation authority, performs the first operation.

Specifically, determining whether the first operation is within the preset operation authority may be implemented by checking whether the operation corresponding to the first voice information can be found among the preset range of operations. If the operation corresponding to the first voice information is the same as at least one operation in the preset range, the smart device responds and performs the first operation.
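Steps 312a to 312c reduce to a membership test of the recognized operation against the stored authority set. The operation names and the per-user authority set here are hypothetical examples.

```python
PRESET_AUTHORITY = {"turn_on_tv", "set_volume", "open_curtains"}  # stored per user

def authorize(first_operation, authority=frozenset(PRESET_AUTHORITY)):
    # Step 312c: perform the operation only if it appears in the
    # preset operation authority acquired after the voice match.
    if first_operation in authority:
        return "executing " + first_operation
    return "You do not have operation authority"

print(authorize("set_volume"))   # executing set_volume
print(authorize("pay_channel"))  # You do not have operation authority
```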
Based on the above embodiment, referring to FIG. 4, in other embodiments of the present invention the voice information processing method further includes:

Step 313: The voice information processing device performs spectrum analysis on the first time-domain waveform of the first voice information to acquire the frequency of the first voice information.

Step 314: The voice information processing device determines whether the frequency of the first voice information is within a preset frequency range.

Specifically, the preset frequency may be set according to the different voice frequencies corresponding to users of different ages. In this embodiment, the preset frequency range may be set to the voice frequency range corresponding to minors. Taking the male voice frequency as an example: the voice frequency before the voice change (a minor) is 174.614 Hz to 184.997 Hz, and the voice frequency after the voice change (an adult) is 87.307 Hz to 92.499 Hz.

Step 315: If the frequency of the first voice information is within the preset frequency range, the voice information processing device sets the operation authority of the user corresponding to the first voice information.

Specifically, if the frequency of the first voice information is within the preset frequency range, the user sending the voice information is a minor, and the functions of the smart device available to the minor need to be restricted. For example, the smart device may be disabled, or certain functions of the smart device may be blocked: a smart relay may stop supplying power to a power outlet, premium channels of a smart TV may be made unavailable, or the gaming functions of the smart device may be made unusable. When the frequency of the first voice information is within the preset frequency range, a first operation corresponding to the first voice information that falls within the restricted function range is not performed; when the frequency of the first voice information is outside the preset range, the relationship between the first voice information and the preset voice information is determined, and the subsequent processing flow is performed according to that relationship.
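The age gate of steps 313 to 315 can be sketched with the example male-voice frequency bands quoted above. This is a simplified illustration: real pitch estimation and the restricted-function list would be more involved, and the operation names are hypothetical.

```python
MINOR_RANGE = (174.614, 184.997)   # male voice before the voice change (Hz)
ADULT_RANGE = (87.307, 92.499)     # male voice after the voice change (Hz)

RESTRICTED = frozenset({"premium_channel", "games"})  # blocked for minors

def operation_allowed(frequency_hz, operation):
    low, high = MINOR_RANGE
    if low <= frequency_hz <= high:        # speaker classified as a minor
        return operation not in RESTRICTED # block the restricted functions
    return True                            # otherwise proceed to voice matching

print(operation_allowed(180.0, "games"))           # False
print(operation_allowed(90.0, "premium_channel"))  # True
```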
It should be noted that, in other embodiments of the present invention, the preset frequency range may instead be the voice frequency range corresponding to adults; in that case, if the frequency of the first voice information is outside the preset frequency range, the voice information processing device sets the operation authority of the user corresponding to the first voice information. The preset frequency range may be set according to the user's specific needs and wishes, or may be set when the smart device leaves the factory. In all embodiments of the present invention, the first feature information may be physical feature information of the voice and the first feature coefficient the physical feature coefficient corresponding to the physical feature information; the second feature information may be behavioral feature information of the voice and the second feature coefficient the behavioral feature coefficient corresponding to the behavioral feature information.

It should be noted that, for explanations of steps or concepts in this embodiment that are the same as in other embodiments, reference may be made to the descriptions in those embodiments, which are not repeated here.

The voice information processing method provided by this embodiment of the present invention can acquire first voice information, analyze the first voice information to obtain its first feature information and second feature information, determine, based on these, the relationship between the first voice information and preset voice information, and finally determine, according to the determination result, whether to perform the operation corresponding to the first voice information. In this way, when a user's voice information is recognized, its first feature information and second feature information can be considered simultaneously to identify the relationship between the user's voice information and the preset voice information. This solves the problem in the prior art that the target user cannot be accurately identified from voice information, enables the target user to be identified accurately and the user's operation authority to be matched precisely, prevents situations in which a user's operation falls outside the actual operation authority, and improves the interaction capability between the user and the device. Furthermore, it reduces the risk that recorded voice information is recognized during voice recognition, causing erroneous operations and unnecessary loss to the user's life and property.
An embodiment of the present invention provides a voice information processing device 4, which can be applied in the voice information processing method provided by the embodiments corresponding to FIGS. 1 to 4. Referring to FIG. 5, the device includes a first acquiring unit 41, a second acquiring unit 42, and a first processing unit 43, wherein:

the first acquiring unit 41 is configured to acquire first voice information;

the second acquiring unit 42 is configured to analyze the first voice information to obtain first feature information and second feature information of the first voice information;

the first processing unit 43 is configured to determine, based on the first feature information and the second feature information of the first voice information, the relationship between the first voice information and preset voice information, and to determine, according to the determination result, whether to perform the operation corresponding to the first voice information.

The voice information processing device provided by this embodiment of the present invention can acquire first voice information, analyze the first voice information to obtain its first feature information and second feature information, determine, based on these, the relationship between the first voice information and preset voice information, and finally determine, according to the determination result, whether to perform the operation corresponding to the first voice information. In this way, when a user's voice information is recognized, its first feature information and second feature information can be considered simultaneously to identify the relationship between the user's voice information and the preset voice information. This solves the problem in the prior art that the target user cannot be accurately identified from voice information, enables the target user to be identified accurately and the user's operation authority to be matched precisely, prevents situations in which a user's operation falls outside the actual operation authority, and improves the interaction capability between the user and the device.

Specifically, referring to FIG. 6, the device further includes a third acquiring unit 44, a first determining unit 45, and a second processing unit 46, wherein:

the third acquiring unit 44 is configured to acquire a first time-domain waveform corresponding to the first voice information;

the first determining unit 45 is configured to determine whether the first time-domain waveform of the first voice information is continuous;

the second processing unit 46 is configured to, if the first time-domain waveform of the first voice information is continuous, perform the analyzing of the first voice information to obtain the first feature information and the second feature information;

the second processing unit 46 is further configured to, if the first time-domain waveform of the first voice information is discontinuous, reacquire the first voice information.

Specifically, referring to FIG. 7, the second acquiring unit 42 includes a first acquiring module 421 and a second acquiring module 422, wherein:

the first acquiring module 421 is configured to perform spectrum analysis on the first time-domain waveform of the first voice information to obtain a frequency-domain waveform of the first voice information;

the first acquiring module 421 is further configured to acquire the first feature information of the first voice information according to the frequency-domain waveform of the first voice information;

the second acquiring module 422 is configured to filter the first time-domain waveform of the first voice information and process it with a delay compensation mechanism to obtain a second time-domain waveform of the first voice information;

the second acquiring module 422 is further configured to acquire the second feature information of the first voice information according to the second time-domain waveform of the first voice information.

Specifically, referring to FIG. 8, the first processing unit 43 includes a third acquiring module 431, a determining module 432, and a processing module 433, wherein:

the third acquiring module 431 is configured to analyze the relationship between the first feature information of the first voice information and the first feature information of the preset voice information to obtain a first feature coefficient of the first voice information;

the third acquiring module 431 is further configured to analyze the relationship between the second feature information of the first voice information and the second feature information of the preset voice information to obtain a second feature coefficient of the first voice information;

the determining module 432 is configured to determine whether the first feature coefficient is less than a first threshold and whether the second feature coefficient is less than a second threshold;

the processing module 433 is configured to, if the first feature coefficient is less than the first threshold and the second feature coefficient is less than the second threshold, determine that the first voice information matches the preset voice information and perform the operation corresponding to the first voice information.

Optionally, the processing module 433 is specifically configured to perform the following steps:

if the first feature coefficient is less than the first threshold and the second feature coefficient is less than the second threshold, determine that the first voice information matches the preset voice information and acquire the preset operation authority of the preset voice information;

recognize the first voice information to obtain a first operation corresponding to the first voice information;

determine whether the first operation is within the preset operation authority, and perform the first operation if it is.

Specifically, referring to FIG. 9, the device further includes a fourth acquiring unit 47, a second determining unit 48, and a setting unit 49, wherein:

the fourth acquiring unit 47 is configured to perform spectrum analysis on the first time-domain waveform of the first voice information to acquire the frequency of the first voice information;

the second determining unit 48 is configured to determine whether the frequency of the first voice information is within a preset frequency range;

the setting unit 49 is configured to, if the frequency of the first voice information is within the preset frequency range, set the operation authority of the user corresponding to the first voice information.

It should be noted that, for the interaction between the units and modules in this embodiment, reference may be made to the interaction in the voice information processing method provided by the embodiments corresponding to FIGS. 1 to 4, which is not repeated here.

The voice information processing device provided by this embodiment of the present invention can acquire first voice information, analyze the first voice information to obtain its first feature information and second feature information, determine, based on these, the relationship between the first voice information and preset voice information, and finally determine, according to the determination result, whether to perform the operation corresponding to the first voice information. In this way, when a user's voice information is recognized, its first feature information and second feature information can be considered simultaneously to identify the relationship between the user's voice information and the preset voice information. This solves the problem in the prior art that the target user cannot be accurately identified from voice information, enables the target user to be identified accurately and the user's operation authority to be matched precisely, prevents situations in which a user's operation falls outside the actual operation authority, and improves the interaction capability between the user and the device. Furthermore, it reduces the risk that recorded voice information is recognized during voice recognition, causing erroneous operations and unnecessary loss to the user's life and property.

In practical applications, the first acquiring unit 41, second acquiring unit 42, first processing unit 43, third acquiring unit 44, first determining unit 45, second processing unit 46, fourth acquiring unit 47, second determining unit 48, setting unit 49, first acquiring module 421, second acquiring module 422, third acquiring module 431, determining module 432, and processing module 433 may each be implemented by a central processing unit (CPU), a microprocessor (Micro Processor Unit, MPU), a digital signal processor (DSP), or a field-programmable gate array (FPGA) located in the wireless data transmission device.

Those skilled in the art should understand that embodiments of the present invention may be provided as a method, a system, or a computer program product. Accordingly, the present invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage) containing computer-usable program code.

The present invention is described with reference to flowcharts and/or block diagrams of methods, devices (systems), and computer program products according to embodiments of the present invention. It should be understood that each flow and/or block of the flowcharts and/or block diagrams, and combinations of flows and/or blocks therein, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data processing device to produce a machine, such that the instructions executed by the processor of the computer or other programmable data processing device produce a device for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of a flowchart and/or one or more blocks of a block diagram.

Industrial Applicability

The present application relates to voice processing technology in the field of communications and solves the problem in the prior art that the target user cannot be accurately identified from voice information. It can accurately identify the target user and precisely match the user's operation authority, prevents situations in which a user's operation falls outside the actual operation authority, and improves the interaction capability between the user and the device.

The above are merely preferred embodiments of the present invention and are not intended to limit the protection scope of the present invention.

Claims (12)

  1. 一种语音信息处理方法,包括:
    获取第一语音信息;
    对所述第一语音信息进行分析处理,得到所述第一语音信息的第一特征信息和第二特征信息;
    基于所述第一语音信息的第一特征信息和第二特征信息,判断所述第一语音信息与预设语音信息之间的关系,并根据判断结果确定是否执行与所述第一语音信息对应的操作。
  2. The method according to claim 1, wherein before the analyzing the first voice information to obtain the first feature information and the second feature information of the first voice information, the method further comprises:
    acquiring a first time-domain waveform corresponding to the first voice information;
    judging whether the first time-domain waveform of the first voice information is continuous;
    if the first time-domain waveform of the first voice information is continuous, performing the analyzing and processing the first voice information to obtain the first feature information and the second feature information of the first voice information; and
    if the first time-domain waveform of the first voice information is discontinuous, re-acquiring the first voice information.
  3. The method according to claim 1 or 2, wherein the analyzing and processing the first voice information to obtain the first feature information and the second feature information of the first voice information comprises:
    performing spectrum analysis on a first time-domain waveform of the first voice information to obtain a frequency-domain waveform of the first voice information;
    acquiring the first feature information of the first voice information according to the frequency-domain waveform of the first voice information;
    filtering the first time-domain waveform of the first voice information and processing the filtered waveform with a delay compensation mechanism to obtain a second time-domain waveform of the first voice information; and
    acquiring the second feature information of the first voice information according to the second time-domain waveform of the first voice information.
  4. The method according to claim 1, wherein the judging, based on the first feature information and the second feature information of the first voice information, the relationship between the first voice information and the preset voice information, and determining, according to the judgment result, whether to execute the operation corresponding to the first voice information comprises:
    analyzing a relationship between the first feature information of the first voice information and first feature information of the preset voice information to obtain a first feature coefficient of the first voice information;
    analyzing a relationship between the second feature information of the first voice information and second feature information of the preset voice information to obtain a second feature coefficient of the first voice information;
    judging whether the first feature coefficient is less than a first threshold and whether the second feature coefficient is less than a second threshold; and
    if the first feature coefficient is less than the first threshold and the second feature coefficient is less than the second threshold, determining that the first voice information matches the preset voice information and executing the operation corresponding to the first voice information.
  5. The method according to claim 4, wherein the, if the first feature coefficient is less than the first threshold and the second feature coefficient is less than the second threshold, determining that the first voice information matches the preset voice information and executing the operation corresponding to the first voice information comprises:
    if the first feature coefficient is less than the first threshold and the second feature coefficient is less than the second threshold, determining that the first voice information matches the preset voice information and acquiring a preset operation permission of the preset voice information;
    recognizing the first voice information to obtain a first operation corresponding to the first voice information; and
    judging whether the first operation is within the preset operation permission, and if the first operation is within the preset operation permission, executing the first operation.
  6. The method according to claim 1, further comprising:
    performing spectrum analysis on a first time-domain waveform of the first voice information to acquire a frequency of the first voice information;
    judging whether the frequency of the first voice information is within a preset frequency range; and
    if the frequency of the first voice information is within the preset frequency range, setting an operation permission of a user corresponding to the first voice information.
  7. A voice information processing apparatus, comprising: a first acquiring unit, a second acquiring unit, and a first processing unit, wherein:
    the first acquiring unit is configured to acquire first voice information;
    the second acquiring unit is configured to analyze and process the first voice information to obtain first feature information and second feature information of the first voice information; and
    the first processing unit is configured to judge, based on the first feature information and the second feature information of the first voice information, a relationship between the first voice information and preset voice information, and determine, according to a judgment result, whether to execute an operation corresponding to the first voice information.
  8. The apparatus according to claim 7, wherein the apparatus further comprises: a third acquiring unit, a first judging unit, and a second processing unit, wherein:
    the third acquiring unit is configured to acquire a first time-domain waveform corresponding to the first voice information;
    the first judging unit is configured to judge whether the first time-domain waveform of the first voice information is continuous;
    the second processing unit is configured to, if the first time-domain waveform of the first voice information is continuous, perform the analyzing and processing the first voice information to obtain the first feature information and the second feature information of the first voice information; and
    the second processing unit is further configured to, if the first time-domain waveform of the first voice information is discontinuous, re-acquire the first voice information.
  9. The apparatus according to claim 7 or 8, wherein the second acquiring unit comprises: a first acquiring module and a second acquiring module, wherein:
    the first acquiring module is configured to perform spectrum analysis on a first time-domain waveform of the first voice information to obtain a frequency-domain waveform of the first voice information;
    the first acquiring module is further configured to acquire the first feature information of the first voice information according to the frequency-domain waveform of the first voice information;
    the second acquiring module is configured to filter the first time-domain waveform of the first voice information and process the filtered waveform with a delay compensation mechanism to obtain a second time-domain waveform of the first voice information; and
    the second acquiring module is further configured to acquire the second feature information of the first voice information according to the second time-domain waveform of the first voice information.
  10. The apparatus according to claim 7, wherein the first processing unit comprises: a third acquiring module, a judging module, and a processing module, wherein:
    the third acquiring module is configured to analyze a relationship between the first feature information of the first voice information and first feature information of the preset voice information to obtain a first feature coefficient of the first voice information;
    the third acquiring module is further configured to analyze a relationship between the second feature information of the first voice information and second feature information of the preset voice information to obtain a second feature coefficient of the first voice information;
    the judging module is configured to judge whether the first feature coefficient is less than a first threshold and whether the second feature coefficient is less than a second threshold; and
    the processing module is configured to, if the first feature coefficient is less than the first threshold and the second feature coefficient is less than the second threshold, determine that the first voice information matches the preset voice information and execute the operation corresponding to the first voice information.
  11. The apparatus according to claim 10, wherein the processing module is further configured to:
    if the first feature coefficient is less than the first threshold and the second feature coefficient is less than the second threshold, determine that the first voice information matches the preset voice information and acquire a preset operation permission of the preset voice information;
    recognize the first voice information to obtain a first operation corresponding to the first voice information; and
    judge whether the first operation is within the preset operation permission, and if the first operation is within the preset operation permission, execute the first operation.
  12. The apparatus according to claim 7, further comprising: a fourth acquiring unit, a second judging unit, and a setting unit, wherein:
    the fourth acquiring unit is configured to perform spectrum analysis on a first time-domain waveform of the first voice information to acquire a frequency of the first voice information;
    the second judging unit is configured to judge whether the frequency of the first voice information is within a preset frequency range; and
    the setting unit is configured to, if the frequency of the first voice information is within the preset frequency range, set an operation permission of a user corresponding to the first voice information.
PCT/CN2017/077537 2016-08-15 2017-03-21 Voice information processing method and apparatus WO2018032760A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP17840737.5A EP3499502A1 (en) 2016-08-15 2017-03-21 Voice information processing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201610674393.3A CN107767860B (zh) Voice information processing method and apparatus
CN201610674393.3 2016-08-15

Publications (1)

Publication Number Publication Date
WO2018032760A1 true WO2018032760A1 (zh) 2018-02-22

Family

ID=61196313

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/077537 WO2018032760A1 (zh) Voice information processing method and apparatus

Country Status (3)

Country Link
EP (1) EP3499502A1 (zh)
CN (2) CN115719592A (zh)
WO (1) WO2018032760A1 (zh)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110415682A * 2019-07-08 2019-11-05 海尔优家智能科技(北京)有限公司 Method and device for controlling a smart device
WO2021128003A1 * 2019-12-24 2021-07-01 广州国音智能科技有限公司 Voiceprint identity determination method and related device
CN112330897B * 2020-08-19 2023-07-25 深圳Tcl新技术有限公司 Method and device for changing the gender corresponding to a user's voice, smart doorbell, and storage medium
CN113053388B * 2021-03-09 2023-08-01 北京百度网讯科技有限公司 Voice interaction method, apparatus, device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1852354A * 2005-10-17 2006-10-25 华为技术有限公司 Method and device for collecting user behavior characteristics
CN101727900A * 2009-11-24 2010-06-09 北京中星微电子有限公司 User pronunciation detection method and device
CN102521281A * 2011-11-25 2012-06-27 北京师范大学 Humming-based music retrieval method using a longest-matching-subsequence algorithm
CN103456312A * 2013-08-29 2013-12-18 太原理工大学 Single-channel blind speech separation method based on computational auditory scene analysis
CN103886870A * 2012-12-21 2014-06-25 索尼公司 Noise detection device, noise detection method, and program

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5714180B2 * 2011-05-19 2015-05-07 Dolby Laboratories Licensing Corporation Forensic detection of parametric audio coding schemes
CN102915742B * 2012-10-30 2014-07-30 中国人民解放军理工大学 Single-channel unsupervised speech/noise separation method based on low-rank and sparse matrix decomposition
CN103811003B * 2012-11-13 2019-09-24 联想(北京)有限公司 Voice recognition method and electronic device
JP6263868B2 * 2013-06-17 2018-01-24 富士通株式会社 Voice processing device, voice processing method, and voice processing program
USRE49014E1 * 2013-06-19 2022-04-05 Panasonic Intellectual Property Corporation Of America Voice interaction method, and device
KR101699252B1 * 2013-10-28 2017-01-24 에스케이텔레콤 주식회사 Method for extracting feature parameters for speech recognition and speech recognition device using the same
CN105261375B * 2014-07-18 2018-08-31 中兴通讯股份有限公司 Voice activity detection method and device
CN105374367B * 2014-07-29 2019-04-05 华为技术有限公司 Abnormal frame detection method and device
CN105654949B * 2016-01-07 2019-05-07 北京云知声信息技术有限公司 Voice wake-up method and device

Also Published As

Publication number Publication date
CN107767860A (zh) 2018-03-06
CN107767860B (zh) 2023-01-13
CN115719592A (zh) 2023-02-28
EP3499502A1 (en) 2019-06-19

Similar Documents

Publication Publication Date Title
US10013977B2 (en) Smart home control method based on emotion recognition and the system thereof
US9704478B1 (en) Audio output masking for improved automatic speech recognition
WO2018032760A1 (zh) Voice information processing method and apparatus
KR101752119B1 (ko) 다수의 디바이스에서의 핫워드 검출
US10270736B2 (en) Account adding method, terminal, server, and computer storage medium
CN103871408B (zh) 一种语音识别方法及装置、电子设备
US9905215B2 (en) Noise control method and device
US10733970B2 (en) Noise control method and device
US20180152163A1 (en) Noise control method and device
US20160019886A1 (en) Method and apparatus for recognizing whisper
WO2016023317A1 (zh) Voice information processing method and terminal
WO2014114049A1 (zh) Voice recognition method and device
US20190043509A1 (en) Audio privacy based on user identification
US9779755B1 (en) Techniques for decreasing echo and transmission periods for audio communication sessions
TW201337722A (zh) 音樂播放裝置及其控制方法
US10224029B2 (en) Method for using voiceprint identification to operate voice recognition and electronic device thereof
CN110428835B (zh) 一种语音设备的调节方法、装置、存储介质及语音设备
US20230267947A1 (en) Noise reduction using machine learning
US8868419B2 (en) Generalizing text content summary from speech content
WO2019101099A1 (zh) Video program recognition method, device, terminal, system, and storage medium
US11551707B2 (en) Speech processing method, information device, and computer program product
CN105551504B (zh) 一种基于哭声触发智能移动终端功能应用的方法及装置
US9626967B2 (en) Information processing method and electronic device
KR20180074152A (ko) 보안성이 강화된 음성 인식 방법 및 장치
WO2021179470A1 (zh) Method, device, and system for recognizing the sampling rate of pure voice data

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17840737

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2017840737

Country of ref document: EP

Effective date: 20190315