WO2018188591A1 - Speech recognition method, apparatus and electronic device - Google Patents

Speech recognition method, apparatus and electronic device

Info

Publication number
WO2018188591A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
voice
speech recognition
recognition information
determining
Application number
PCT/CN2018/082525
Other languages
English (en)
French (fr)
Inventor
陈君宇
贾磊
韩伟
吴震
郭启行
Original Assignee
北京猎户星空科技有限公司
Application filed by 北京猎户星空科技有限公司
Publication of WO2018188591A1

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 — Speech recognition
    • G10L15/08 — Speech classification or search
    • G10L15/18 — Speech classification or search using natural language modelling
    • G10L15/1822 — Parsing for meaning understanding
    • G10L15/22 — Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/226 — Procedures used during a speech recognition process, e.g. man-machine dialogue, using non-speech characteristics
    • G10L2015/228 — Procedures used during a speech recognition process, e.g. man-machine dialogue, using non-speech characteristics of application context

Definitions

  • the present application relates to the field of voice recognition technology, and in particular, to a voice recognition method, device, and electronic device.
  • One approach is: the smart device receives voice instruction information, recognizes it to obtain recognized instruction information, and responds to the recognized instruction information.
  • The other approach is: the smart device receives the voice instruction information and sends it to a cloud server; the cloud server recognizes the voice instruction information, obtains the recognized instruction information, responds to it, and returns a response message to the smart device.
  • The purpose of the present application is to provide a voice recognition method, device, and electronic device that improve the recognition of discontinuous, incoherent speech.
  • An embodiment of the present application provides a voice recognition method applied to an electronic device, the method including: obtaining to-be-recognized voice information; recognizing the to-be-recognized voice information to obtain the current speech recognition information corresponding to it; and determining whether there is saved to-be-spliced speech recognition information;
  • if there is, the to-be-spliced speech recognition information and the current speech recognition information are spliced to obtain spliced speech recognition information, and it is determined whether the spliced speech recognition information has complete semantics; if so, the spliced speech recognition information is determined as the voice recognition result;
  • if not, the spliced speech recognition information is saved as the to-be-spliced speech recognition information, and the step of obtaining the to-be-recognized voice information is continued.
  • Optionally, if there is no saved to-be-spliced speech recognition information, the method further includes: determining whether the current speech recognition information has complete semantics;
  • if so, the current speech recognition information is determined as the voice recognition result;
  • if not, the current speech recognition information is saved as the to-be-spliced speech recognition information, and the step of obtaining the to-be-recognized voice information is continued.
  • Optionally, the step of determining whether the spliced speech recognition information has complete semantics includes: performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result; matching the semantic parsing result against the intents stored in a preset intent library to obtain a user intent; and obtaining, from the intent library, the response information corresponding to the user intent;
  • if the response information is prompt information indicating that no service can be provided, it is determined that the spliced speech recognition information does not have complete semantics;
  • if the response information is not such prompt information, it is determined that the spliced speech recognition information has complete semantics.
  • Optionally, the intent library is a tree-structured intent library;
  • the step of performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result includes: extracting, according to a preset rule, multiple segments of feature text from the speech recognition information, where each segment of feature text corresponds one-to-one to a level in the preset tree-structured intent library;
  • the step of matching the semantic parsing result against the intents stored in the preset intent library to obtain the user intent includes: matching the feature text of each level against the candidate intents of that level in the tree-structured intent library, level by level, until all the feature text has been matched.
  • Optionally, after the spliced speech recognition information is saved as the to-be-spliced speech recognition information, the method further includes:
  • if no to-be-recognized voice information is obtained when a first preset duration is reached, performing semantic parsing on the saved to-be-spliced speech recognition information to obtain a semantic parsing result;
  • and outputting, to the user, the preset service prompt voice information corresponding to the semantic parsing result.
  • Optionally, after the spliced speech recognition information is saved as the to-be-spliced speech recognition information, the method further includes:
  • if no to-be-recognized voice information is obtained when the first preset duration is reached, outputting voice recognition failure prompt voice information to the user.
  • Optionally, the electronic device is a smart device;
  • the step of obtaining the to-be-recognized voice information includes: detecting voice information in real time; after the user is detected inputting voice information, when the silence duration reaches a second preset duration, determining the voice information input by the user as the to-be-recognized voice information.
  • Optionally, the electronic device is a cloud server communicatively connected to the smart device;
  • the step of obtaining the to-be-recognized voice information includes: receiving the to-be-recognized voice information sent by the smart device, where the to-be-recognized voice information sent by the smart device is: after the smart device detects the user inputting voice information and the silence duration reaches the second preset duration, the voice information input by the user is determined as the to-be-recognized voice information and then sent to the cloud server.
  • An embodiment of the present application further provides a voice recognition device applied to an electronic device, the device including:
  • an obtaining module configured to obtain to-be-recognized voice information;
  • a recognition module configured to recognize the to-be-recognized voice information and obtain the current speech recognition information corresponding to it;
  • a first judging module configured to judge whether there is saved to-be-spliced speech recognition information;
  • a splicing module configured to splice the to-be-spliced speech recognition information and the current speech recognition information to obtain spliced speech recognition information when the judgment result of the first judging module is that it exists;
  • a first determining module configured to determine whether the spliced speech recognition information has complete semantics;
  • a second determining module configured to determine the spliced speech recognition information as the voice recognition result when the determination result of the first determining module is yes;
  • a third determining module configured to, when the determination result of the first determining module is no, save the spliced speech recognition information as the to-be-spliced speech recognition information and trigger the obtaining module.
  • Optionally, the device further includes:
  • a second judging module configured to judge whether the current speech recognition information has complete semantics when the first judging module determines that there is no saved to-be-spliced speech recognition information;
  • a fourth determining module configured to determine the current speech recognition information as the voice recognition result when the judgment result of the second judging module is yes;
  • a fifth determining module configured to, when the judgment result of the second judging module is no, save the current speech recognition information as the to-be-spliced speech recognition information and trigger the obtaining module.
  • Optionally, the first determining module includes:
  • a parsing unit configured to perform semantic parsing on the spliced speech recognition information to obtain a semantic parsing result;
  • a matching unit configured to match the semantic parsing result against the intents stored in a preset intent library to obtain a user intent;
  • an obtaining unit configured to obtain, from the intent library, the response information corresponding to the user intent;
  • a judging unit configured to judge whether the response information is prompt information indicating that no service can be provided;
  • a first determining unit configured to determine that the spliced speech recognition information does not have complete semantics when the judging unit judges that the response information is such prompt information;
  • a second determining unit configured to determine that the spliced speech recognition information has complete semantics when the judging unit judges that the response information is not such prompt information.
  • Optionally, the intent library is a tree-structured intent library;
  • the parsing unit is configured to extract, according to a preset rule, multiple segments of feature text from the speech recognition information, where each segment of feature text corresponds one-to-one to a level in the preset tree-structured intent library;
  • the matching unit includes:
  • a first determining subunit configured to determine the feature text corresponding to the first level as the feature text of the current level;
  • a second determining subunit configured to determine all the intents of the first level in the tree-structured intent library as candidate intents;
  • a matching subunit configured to match the feature text of the current level against the candidate intents to obtain a current intent;
  • a judging subunit configured to judge whether all the feature text has been matched;
  • a third determining subunit configured to determine the current intent as the user intent when the judgment result of the judging subunit is yes;
  • a fourth determining subunit configured to, when the judgment result of the judging subunit is no, determine the feature text corresponding to the next level as the feature text of the current level, determine all the intents of the next level corresponding to the current intent in the tree-structured intent library as candidate intents, and trigger the matching subunit.
  • Optionally, the device further includes:
  • a parsing module configured to perform semantic parsing on the saved to-be-spliced speech recognition information to obtain a semantic parsing result if no to-be-recognized voice information is obtained when the first preset duration is reached;
  • a first output module configured to output, to the user, the preset service prompt voice information corresponding to the semantic parsing result.
  • Optionally, the device further includes:
  • a second output module configured to output voice recognition failure prompt voice information to the user if no to-be-recognized voice information is obtained when the first preset duration is reached.
  • Optionally, the electronic device is a smart device;
  • the obtaining module includes:
  • a detecting unit configured to detect voice information in real time;
  • a third determining unit configured to determine the voice information input by the user as the to-be-recognized voice information when, after the user is detected inputting voice information, the silence duration reaches the second preset duration.
  • Optionally, the electronic device is a cloud server communicatively connected to the smart device;
  • the obtaining module is specifically configured to receive the to-be-recognized voice information sent by the smart device, where the to-be-recognized voice information sent by the smart device is: after the smart device detects the user inputting voice information and the silence duration reaches the second preset duration, the voice information input by the user is determined as the to-be-recognized voice information and then sent to the cloud server.
  • An embodiment of the present application further provides an electronic device, including: a housing, a processor, a memory, a circuit board, and a power supply circuit, where the circuit board is disposed inside the space enclosed by the housing, and the processor and the memory are disposed on the circuit board; the power supply circuit is configured to supply power to the circuits or devices of the electronic device; the memory is configured to store executable program code; and the processor is configured to run a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the described voice recognition method.
  • An embodiment of the present application further provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the above voice recognition method.
  • An embodiment of the present application further provides an application program which, when run, executes the above voice recognition method.
  • The voice recognition method, device, and electronic device provided by the embodiments of the present application obtain to-be-recognized voice information; recognize it to obtain the current speech recognition information corresponding to it; determine whether there is saved to-be-spliced speech recognition information; if present, splice the to-be-spliced speech recognition information with the current speech recognition information to obtain spliced speech recognition information; determine whether the spliced speech recognition information has complete semantics; if so, determine the spliced speech recognition information as the voice recognition result; if not, save the spliced speech recognition information as the to-be-spliced speech recognition information and continue to obtain to-be-recognized voice information.
  • That is, when there is no saved to-be-spliced speech recognition information, a complete-semantics determination is performed on the current speech recognition information; when there is, the saved to-be-spliced speech recognition information and the current speech recognition information are spliced.
  • The spliced speech recognition information is then checked for complete semantics; if it has none, voice information continues to be obtained and spliced onto the existing recognition information until complete semantics are obtained.
  • The embodiments of the present application thereby ensure the integrity of the recognized semantics and improve the recognition of incoherent speech.
  • FIG. 1 is a flowchart of a voice recognition method according to an embodiment of the present application;
  • FIG. 2 is a flowchart of determining whether spliced speech recognition information has complete semantics according to an embodiment of the present application;
  • FIG. 3 is a schematic structural diagram of a voice recognition apparatus according to an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of a first determining module according to an embodiment of the present application.
  • FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
  • the embodiment of the present application provides a voice recognition method, which may be applied to an electronic device, where the electronic device may be a smart device or a cloud server that is communicatively connected to the smart device.
  • the smart device may be a device with a voice recognition function such as a smart phone, a smart speaker, an intelligent robot, or a smart tablet.
  • FIG. 1 is a flowchart of a voice recognition method according to an embodiment of the present application, where the method includes the following steps.
  • S110: Obtain to-be-recognized voice information.
  • The to-be-recognized voice information is voice information containing speech uttered by the user.
  • The electronic device can monitor the sounds around it and acquire the corresponding voice information as the to-be-recognized voice information.
  • When the electronic device is a smart device, step S110 may include:
  • A1: detecting voice information in real time;
  • A2: after the user is detected inputting voice information, when the silence duration reaches the second preset duration, determining the voice information input by the user as the to-be-recognized voice information.
  • In practice, the smart device listens for the user's wake-up voice, i.e., speech containing a preset wake-up word, and after being activated by it detects the surrounding voice information in real time. Suppose the surrounding sound volume is initially low, so the device is in a silent phase. When the sound volume is detected to exceed a preset threshold, it can be determined that the user has begun inputting voice information and the voice phase has started; the smart device collects voice information during the voice phase. After a period of speech, the sound volume falls below the threshold and the device enters the silent phase again.
  • When the silence duration, i.e., the duration of the silent phase, reaches the second preset duration, the voice information input by the user is determined as the to-be-recognized voice information; that is, the smart device determines the voice information it has collected as the to-be-recognized voice information.
  • The second preset duration can be set freely; for example, it may be 500 milliseconds.
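  • As an illustration of the endpointing logic above, the following is a minimal Python sketch assuming a simple per-frame energy threshold; the frame length, threshold value, and function name are illustrative assumptions, not part of the patent.

```python
FRAME_SECONDS = 0.02          # assumed analysis frame length (illustrative)
VOLUME_THRESHOLD = 0.01       # assumed energy threshold separating speech from silence
SECOND_PRESET_DURATION = 0.5  # 500 ms, the example value given above


def endpoint_utterance(frames):
    """Collect audio frames from the start of speech until the silence
    duration reaches the second preset duration; the collected frames
    form the to-be-recognized voice information."""
    collected = []
    in_speech = False
    silent_frames = 0
    for energy, samples in frames:        # frames: iterable of (energy, samples) pairs
        if energy > VOLUME_THRESHOLD:     # voice phase: the user is speaking
            in_speech = True
            silent_frames = 0
            collected.append(samples)
        elif in_speech:                   # silent phase after speech began
            silent_frames += 1
            if silent_frames * FRAME_SECONDS >= SECOND_PRESET_DURATION:
                break                     # silence reached the second preset duration
    return collected
```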
  • When the electronic device is a cloud server communicatively connected to the smart device, step S110 may include:
  • receiving the to-be-recognized voice information sent by the smart device, where the to-be-recognized voice information sent by the smart device is: after the smart device detects the user inputting voice information and the silence duration reaches the second preset duration, the voice information input by the user is determined as the to-be-recognized voice information and then sent to the cloud server.
  • In this case, the smart device likewise begins acquiring voice information when the voice phase starts; after a period of speech, the sound volume falls below the threshold and the silent phase begins again.
  • When the silence duration reaches the second preset duration, the smart device stops acquiring voice information, determines the acquired voice information as the to-be-recognized voice information, and sends it to the cloud server, which receives it.
  • S120: Recognize the to-be-recognized voice information to obtain the current speech recognition information corresponding to it.
  • The electronic device performs speech recognition on the to-be-recognized voice information to obtain the corresponding speech recognition information. Since the to-be-recognized voice information may come from any period of time and is not necessarily the first such information received by the electronic device, the obtained speech recognition information is referred to as the current speech recognition information. The specific process of speech recognition is prior art and is not described again here.
  • S130: Determine whether there is saved to-be-spliced speech recognition information; if yes, perform step S140; if no, perform step S180.
  • S140: Splice the to-be-spliced speech recognition information and the current speech recognition information to obtain the spliced speech recognition information.
  • The to-be-spliced speech recognition information refers to speech recognition information that does not have complete semantics and still needs to be spliced with further input to obtain complete semantics.
  • After the electronic device obtains the current speech recognition information, it determines whether there is saved to-be-spliced speech recognition information. If there is, it indicates that the user's speech is incoherent and that the current speech recognition information is not the first speech recognition information received by the electronic device; it therefore needs to be spliced with the previously saved to-be-spliced speech recognition information to obtain the spliced speech recognition information.
  • For example, suppose the current speech recognition information is "that" (preceded by a filler word such as "um") and the to-be-spliced speech recognition information is "I want to listen"; splicing the current speech recognition information with the to-be-spliced speech recognition information yields the spliced speech recognition information "I want to listen + that".
  • If there is no saved to-be-spliced speech recognition information, step S180 is performed.
  • S150: Determine whether the spliced speech recognition information has complete semantics; if yes, perform step S160; if no, perform step S170.
  • S160: Determine the spliced speech recognition information as the voice recognition result.
  • S170: Save the spliced speech recognition information as the to-be-spliced speech recognition information, and continue to perform step S110.
  • After obtaining the spliced speech recognition information, the electronic device determines whether it has complete semantics. If it does, the recognition process has succeeded and the spliced speech recognition information is determined as the voice recognition result. If it does not, the spliced speech recognition information is saved as the to-be-spliced speech recognition information, and the electronic device continues to wait for and acquire the next arriving to-be-recognized voice information, ensuring that the complete speech uttered by the user is acquired and improving the recognition of incoherent speech.
  • S180: If it is determined that there is no saved to-be-spliced speech recognition information, determine whether the current speech recognition information has complete semantics; if yes, perform step S190; if no, perform step S1100.
  • S190: Determine the current speech recognition information as the voice recognition result.
  • S1100: Save the current speech recognition information as the to-be-spliced speech recognition information, and continue to perform step S110.
  • In this case, the current speech recognition information is the first speech recognition information received by the electronic device, so the complete-semantics determination is performed directly on it.
  • The voice recognition method obtains to-be-recognized voice information; recognizes it to obtain the current speech recognition information corresponding to it; determines whether there is saved to-be-spliced speech recognition information; if present, splices the to-be-spliced speech recognition information with the current speech recognition information to obtain spliced speech recognition information; determines whether the spliced speech recognition information has complete semantics; if so, determines the spliced speech recognition information as the voice recognition result; if not, saves the spliced speech recognition information as the to-be-spliced speech recognition information and continues to obtain to-be-recognized voice information.
  • That is, when there is no saved to-be-spliced speech recognition information, a complete-semantics determination is performed on the current speech recognition information; when there is, the saved to-be-spliced speech recognition information and the current speech recognition information are spliced.
  • The spliced speech recognition information is then checked for complete semantics; if it has none, voice information continues to be obtained and spliced until complete semantics are obtained.
  • The embodiment of the present application thereby ensures the integrity of the recognized semantics and improves the recognition of incoherent speech.
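  • The overall flow of steps S110 through S1100 can be sketched as a loop. The following Python sketch assumes hypothetical get_voice_info, recognize, and has_complete_semantics callables standing in for the acquisition, recognition, and semantics-determination components described above.

```python
def run_recognition_loop(get_voice_info, recognize, has_complete_semantics):
    """Splice incoherent speech fragments until complete semantics are
    obtained (steps S110 to S1100). The three callables are stand-ins for
    the acquisition, recognition, and semantics-determination components."""
    pending = None                               # saved to-be-spliced recognition info
    while True:
        voice_info = get_voice_info()            # S110: obtain to-be-recognized voice info
        current = recognize(voice_info)          # S120: current speech recognition info
        if pending is not None:                  # S130: saved to-be-spliced info exists
            spliced = pending + " + " + current  # S140: splice the two
            if has_complete_semantics(spliced):  # S150
                return spliced                   # S160: the voice recognition result
            pending = spliced                    # S170: save and keep listening
        else:
            if has_complete_semantics(current):  # S180
                return current                   # S190: the voice recognition result
            pending = current                    # S1100: save and keep listening
```

  • With the fragments "I want to listen" and "that", for instance, the loop saves "I want to listen", splices the next fragment into "I want to listen + that", and returns only once the spliced text has complete semantics.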
  • In an implementation of the embodiment of the present application, step S150 may include the following steps:
  • B1: Perform semantic parsing on the spliced speech recognition information to obtain a semantic parsing result.
  • B2: Match the semantic parsing result against the intents stored in the preset intent library to obtain the user intent.
  • B3: Obtain, from the intent library, the response information corresponding to the user intent.
  • B4: Determine whether the response information is prompt information indicating that no service can be provided; if it is, perform step B5; if it is not, perform step B6.
  • B5: Determine that the spliced speech recognition information does not have complete semantics.
  • B6: Determine that the spliced speech recognition information has complete semantics.
  • In this implementation, the user intent is obtained by matching the semantic parsing result against the intents stored in the preset intent library, the response information corresponding to the user intent is obtained from the intent library, and it is determined whether the response information is prompt information indicating that no service can be provided, thereby determining whether the speech recognition information has complete semantics.
  • This implementation makes the process of determining whether speech recognition information has complete semantics easier to realize.
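  • A minimal sketch of this determination follows, assuming a toy intent library and hypothetical parse and match helpers; the entries echo the examples in this application and are illustrative only.

```python
CANNOT_SERVE = "Sorry, the instruction is incomplete and cannot provide the service"

# A toy intent library mapping intents to response information; the entries
# echo the examples in this application and are illustrative only.
INTENT_LIBRARY = {
    "listen to a song": "OK, playing the requested song",
    "I want to": CANNOT_SERVE,
}


def has_complete_semantics(spliced_text, parse, match):
    """Steps B1 to B6: parse, match against the intent library, fetch the
    response information, and treat a cannot-serve prompt as incomplete
    semantics. `parse` and `match` are hypothetical stand-ins for the
    semantic parser and intent matcher."""
    parsing_result = parse(spliced_text)                       # B1
    user_intent = match(parsing_result, INTENT_LIBRARY)        # B2
    response = INTENT_LIBRARY.get(user_intent, CANNOT_SERVE)   # B3
    return response != CANNOT_SERVE                            # B4 to B6
```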
  • In a specific implementation, step S150 may include the steps shown in FIG. 2.
  • FIG. 2 is a flowchart of determining whether the spliced speech recognition information has complete semantics according to an embodiment of the present application.
  • Step B1 corresponds to step S210; step B2 corresponds to steps S220 to S270; step B3 corresponds to step S280; step B4 corresponds to step S290; step B5 corresponds to step S2100; and step B6 corresponds to step S2110.
  • Specifically, the speech recognition information may be input into a preset feature text extraction model to obtain the feature text of each level output by the model.
  • The feature text extraction model is configured to perform semantic parsing on the speech recognition information and obtain the feature text corresponding to each level in the preset tree-structured intent library.
  • All the levels in the tree-structured intent library may correspond to a single feature text extraction model.
  • The speech recognition result is input into this feature text extraction model, and the feature text of each level output by the model is obtained.
  • S220: Determine the feature text corresponding to the first level as the feature text of the current level.
  • Specifically, the electronic device may determine the feature text corresponding to the first level as the feature text of the current level, and determine all the intents of the first level in the tree-structured intent library as the candidate intents, to facilitate the subsequent steps.
  • The electronic device may then match the feature text of the current level against each candidate intent to obtain the current intent.
  • Specifically, the successfully matched candidate intent may be used directly as the current intent.
  • If the speech recognition information includes only the feature text corresponding to the first level, then, since all the feature text included in the speech recognition information has been matched, the electronic device can determine the current intent as the user intent.
  • For example, if the current intent is "listen to a song", the user intent is "listen to a song".
  • S270: Determine the feature text corresponding to the next level as the feature text of the current level; determine all the intents of the next level corresponding to the current intent in the tree-structured intent library as the candidate intents; and return to step S240.
  • If the speech recognition information includes not only the feature text corresponding to the first level but also feature text corresponding to other levels, then, since only the feature text corresponding to the first level has been matched, the electronic device determines the feature text corresponding to the next level as the feature text of the current level, and determines all the intents of the next level corresponding to the current intent in the tree-structured intent library as the candidate intents.
  • The method then returns to step S240, i.e., the feature text of the current level is matched against the candidate intents to obtain the current intent. It can be understood that the feature text of the current level is now the feature text corresponding to the second level, and the candidate intents are all the intents of the second level in the tree-structured intent library that correspond to the previously matched intent.
  • The electronic device can perform steps S240 and S250 cyclically until all the feature text has been matched.
  • That is, the electronic device first matches the feature text of the first level against all the intents of the first level in the tree-structured intent library, then matches the feature text of the second level against the corresponding intents of the second level, then the feature text of the third level against the corresponding intents of the third level, and so on, level by level, until the feature text of every level has been matched.
  • At that point, the current intent constitutes the final user intent.
  • That is, the user intent is the intent matched at the current level together with the intents successfully matched at each preceding level.
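  • The level-by-level matching of steps S220 through S270 can be sketched as follows, assuming a toy tree-structured intent library in which each level's intents map to the candidate intents of the next level; the entries are illustrative only.

```python
# A toy tree-structured intent library: each level's intents map to the
# candidate intents of the next level. The entries are illustrative only.
TREE_INTENT_LIBRARY = {
    "listen": {"song": {}, "radio": {}},
    "watch": {"movie": {}},
}


def match_user_intent(feature_texts):
    """Level-by-level matching (steps S220 to S270): match the feature text
    of each level against that level's candidate intents; the candidates of
    the next level are the children of the intent just matched."""
    candidates = TREE_INTENT_LIBRARY      # S230: all first-level intents
    matched = []                          # intents matched so far, level by level
    for feature in feature_texts:         # S220/S270: current level's feature text
        if feature not in candidates:     # S240: match against candidate intents
            return None                   # no candidate intent matches this level
        matched.append(feature)           # the current intent at this level
        candidates = candidates[feature]  # S270: next level's candidate intents
    return " / ".join(matched)            # S260: matched intents form the user intent


# For example, match_user_intent(["listen", "song"]) returns "listen / song".
```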
  • S290: Determine whether the response information is prompt information indicating that no service can be provided; if it is, perform step S2100; if it is not, perform step S2110.
  • The intent library includes the correspondence between all the intents and the response information. The electronic device matches the semantic parsing result against the intents stored in the preset intent library to obtain the user intent.
  • Once the electronic device obtains the user intent, it knows what kind of service the user needs, and can provide the corresponding service to the user, or output the corresponding response information, according to the correspondence between intents and response information, i.e., between intents and services.
  • The response information includes service response information corresponding to the user intent, and prompt information indicating that the user intent is incomplete and no service can be provided. For example, if the user intent is "I want to", the response information obtained may be "Sorry, the instruction is incomplete and cannot provide the service" or the like.
  • Therefore, the electronic device may obtain the response information corresponding to the user intent from the intent library and determine whether it is prompt information indicating that no service can be provided. If it is, it is determined that the spliced speech recognition information does not have complete semantics; if it is not, it is determined that the spliced speech recognition information has complete semantics.
  • In another embodiment of the present application, the electronic device may further have a reminding function. Therefore, after the spliced speech recognition information is saved as the to-be-spliced speech recognition information, the method may further include:
  • if no to-be-recognized voice information is obtained when a first preset duration is reached, performing semantic parsing on the saved to-be-spliced speech recognition information to obtain a semantic parsing result, and outputting to the user the preset service prompt voice information corresponding to the semantic parsing result.
  • The first preset duration may measure the length of time from the start of the voice information uttered by the user to the current moment.
  • In that case, if the electronic device has not obtained to-be-recognized voice information when the first preset duration is reached, the user has hesitated for a long time while trying to utter a complete sentence. For example, if the user's speech is "I want to listen to... that... um", the first preset duration here spans from the start of "I want to listen" to the moment after "um", i.e., the current moment.
  • Alternatively, the first preset duration may measure the length of time from the moment the user last uttered voice information to the current moment.
  • In that case, if the electronic device has not obtained to-be-recognized voice information when the first preset duration is reached, the user has hesitated for a long time between the words of a sentence. For example, if the user's speech is "I want to listen to... that... um", the first preset duration here spans from the end of "um" to the current moment.
  • The first preset duration can be set freely; the longer it is, the longer the electronic device waits for a hesitating user to continue speaking.
  • For example, the first preset duration may be 4 seconds.
  • When the electronic device reaches the first preset duration without obtaining to-be-recognized voice information while it still holds saved to-be-spliced speech recognition information, the user has not finished a complete sentence and the electronic device cannot make a targeted response. In this case, the electronic device can perform semantic parsing on the saved to-be-spliced speech recognition information to obtain a semantic parsing result and output to the user the preset service prompt voice information corresponding to that result.
  • For example, service prompt voice information for "I want to listen" can be preset, such as "If you want to listen to a song, please say it like this: I want to listen to The Water".
  • If the to-be-spliced speech recognition information saved by the electronic device is "I want to listen + that + um" and no to-be-recognized voice information is obtained when the first preset duration is reached, semantic parsing of the saved information yields the parsing result "I want to listen", and the corresponding service prompt voice information is output to the user.
  • In this way, when the user hesitates for a long time, the electronic device can provide a service prompt, which increases the intelligence of the electronic device and improves the user experience.
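  • A minimal sketch of this timeout behavior follows, assuming voice information arrives on a queue and that parse and prompt_for are hypothetical stand-ins for the semantic parser and the preset prompt lookup.

```python
import queue

FIRST_PRESET_DURATION = 4.0  # seconds, the example value given above


def wait_or_prompt(voice_queue, pending, parse, prompt_for):
    """If no to-be-recognized voice information arrives within the first
    preset duration while to-be-spliced recognition information is saved,
    parse the saved information and return the corresponding service prompt.
    `parse` and `prompt_for` are hypothetical stand-ins for the semantic
    parser and the preset prompt lookup."""
    try:
        # Block for up to the first preset duration awaiting the next utterance.
        return ("voice", voice_queue.get(timeout=FIRST_PRESET_DURATION))
    except queue.Empty:
        if pending:                        # e.g. "I want to listen + that + um"
            result = parse(pending)        # e.g. parses to "I want to listen"
            return ("prompt", prompt_for(result))
        return ("failure", "voice recognition failure prompt voice information")
```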
  • In another embodiment of the present application, after the spliced speech recognition information is saved as the to-be-spliced speech recognition information, the method may further include:
  • if no to-be-recognized voice information is obtained when the first preset duration is reached, outputting voice recognition failure prompt voice information to the user.
  • If no to-be-recognized voice information is obtained when the first preset duration is reached, the user has been unable to state the specific service content for a long time and may not be able to think of it at all.
  • In this case, the electronic device need not continue waiting for incoming to-be-recognized voice information and can output voice recognition failure prompt voice information to the user.
  • The voice recognition failure prompt voice information may be, for example, "Sorry, I didn't understand", "Please say it again", or "What service do you need?".
  • the electronic device may enter a low power standby state after outputting the voice recognition failure prompting voice information to the user.
  • the electronic device may also have the function of outputting voice recognition failure prompting voice information, which increases the intelligence of the electronic device and improves the user experience.
  • After determining the voice recognition result, the electronic device may perform semantic parsing on it and, according to the parsing result, provide the corresponding service to the user.
  • the smart device may perform semantic analysis on the voice recognition result, and determine to provide a corresponding service for the user according to the semantic analysis. Assuming that the parsing result is an instruction to play audio in the smart device, the instruction is executed to play the corresponding audio.
  • the cloud server may perform semantic analysis on the voice recognition result, and determine, according to the semantic analysis, the corresponding service for the user. Assuming that the parsing result is an instruction to play audio in the cloud server, the instruction is executed to send the corresponding audio to the smart device to enable the smart device to play the audio.
  • FIG. 3 is a schematic structural diagram of a voice recognition apparatus according to an embodiment of the present disclosure, where the apparatus includes:
  • the obtaining module 310 is configured to obtain to-be-recognized voice information;
  • the recognition module 320 is configured to recognize the to-be-recognized voice information and obtain the current speech recognition information corresponding to it;
  • the first judging module 330 is configured to judge whether there is saved to-be-spliced speech recognition information;
  • the splicing module 340 is configured to splice the to-be-spliced speech recognition information and the current speech recognition information to obtain the spliced speech recognition information when the judgment result of the first judging module 330 is that it exists;
  • the first determining module 350 is configured to determine whether the spliced speech recognition information has complete semantics;
  • the second determining module 360 is configured to determine the spliced speech recognition information as the voice recognition result when the determination result of the first determining module 350 is yes;
  • the third determining module 370 is configured to, when the determination result of the first determining module 350 is no, save the spliced speech recognition information as the to-be-spliced speech recognition information and trigger the obtaining module 310.
  • The voice recognition device obtains to-be-recognized voice information; recognizes it to obtain the current speech recognition information corresponding to it; determines whether there is saved to-be-spliced speech recognition information; if present, splices the to-be-spliced speech recognition information with the current speech recognition information to obtain spliced speech recognition information; determines whether the spliced speech recognition information has complete semantics; if so, determines the spliced speech recognition information as the voice recognition result; if not, saves the spliced speech recognition information as the to-be-spliced speech recognition information and continues to obtain to-be-recognized voice information.
  • That is, when there is no saved to-be-spliced speech recognition information, a complete-semantics determination is performed on the current speech recognition information; when there is, the saved to-be-spliced speech recognition information and the current speech recognition information are spliced.
  • The spliced speech recognition information is then checked for complete semantics; if it has none, voice information continues to be obtained and spliced until complete semantics are obtained.
  • The embodiment of the present application thereby ensures the integrity of the recognized semantics and improves the recognition of incoherent speech.
  • Optionally, the device further includes:
  • the second judging module 380 is configured to judge whether the current speech recognition information has complete semantics when the first judging module 330 determines that there is no saved to-be-spliced speech recognition information;
  • the fourth determining module 390 is configured to determine the current speech recognition information as the voice recognition result when the judgment result of the second judging module 380 is yes;
  • the fifth determining module 3100 is configured to, when the judgment result of the second judging module 380 is no, save the current speech recognition information as the to-be-spliced speech recognition information and trigger the obtaining module 310.
  • FIG. 4 is a schematic structural diagram of a first determining module according to an embodiment of the present disclosure, where the first determining module 350 includes:
  • the parsing unit 351 is configured to perform semantic parsing on the spliced speech recognition information to obtain a semantic parsing result;
  • the matching unit 352 is configured to match the semantic parsing result against the intents stored in the preset intent library to obtain the user intent;
  • the obtaining unit 353 is configured to obtain, from the intent library, the response information corresponding to the user intent;
  • the judging unit 354 is configured to judge whether the response information is prompt information indicating that no service can be provided;
  • the first determining unit 355 is configured to determine that the spliced speech recognition information does not have complete semantics when the judging unit judges that the response information is such prompt information;
  • the second determining unit 356 is configured to determine that the spliced speech recognition information has complete semantics when the judging unit judges that the response information is not such prompt information.
  • In this embodiment, the user intent is obtained by matching the semantic parsing result against the intents stored in the preset intent library, the response information corresponding to the user intent is obtained from the intent library, and it is determined whether the response information is prompt information indicating that no service can be provided, thereby determining whether the speech recognition information has complete semantics.
  • This makes the process of determining whether speech recognition information has complete semantics easier to realize.
  • Optionally, the intent library is a tree-structured intent library;
  • the parsing unit 351 is specifically configured to extract, according to a preset rule, multiple segments of feature text from the speech recognition information, where each segment of feature text corresponds one-to-one to a level in the preset tree-structured intent library;
  • the matching unit 352 includes:
  • the first determining subunit 3521, configured to determine the feature text corresponding to the first level as the feature text of the current level;
  • the second determining subunit 3522, configured to determine all the intents of the first level in the tree-structured intent library as candidate intents;
  • the matching subunit 3523, configured to match the feature text of the current level against the candidate intents to obtain a current intent;
  • the judging subunit 3524, configured to judge whether all the feature text has been matched;
  • the third determining subunit 3525, configured to determine the current intent as the user intent when the judgment result of the judging subunit is yes;
  • the fourth determining subunit 3526, configured to, when the judgment result of the judging subunit is no, determine the feature text corresponding to the next level as the feature text of the current level, determine all the intents of the next level corresponding to the current intent in the tree-structured intent library as candidate intents, and trigger the matching subunit 3523.
  • Optionally, the device further includes: a third judging module 3110 configured to judge whether to-be-recognized voice information is obtained when the first preset duration is reached;
  • the parsing module 3120 is configured to perform semantic parsing on the saved to-be-spliced speech recognition information to obtain a semantic parsing result when the third judging module 3110 determines that no to-be-recognized voice information is obtained when the first preset duration is reached;
  • the first output module is configured to output, to the user, the preset service prompt voice information corresponding to the semantic parsing result.
  • the device further includes:
  • the second output module is configured to: if the voice information to be recognized is not obtained when the first preset duration is reached, output voice recognition failure prompt voice information to the user.
  • the electronic device is a smart device
  • the obtaining module includes:
  • a detecting unit configured to detect voice information in real time
  • the third determining unit is configured to determine the voice information input by the user as the to-be-identified voice information when the silence duration reaches the second preset duration after detecting the user inputting the voice information.
  • the electronic device is a cloud server communicatively connected to the smart device;
  • the obtaining module is specifically configured to receive the to-be-recognized voice information sent by the smart device, where the to-be-recognized voice information sent by the smart device is: after the smart device detects the user inputting voice information and the silence duration reaches the second preset duration, the voice information input by the user is determined as the to-be-recognized voice information and then sent to the cloud server.
  • FIG. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure, where the electronic device includes:
  • a housing 510, a processor 520, a memory 530, a circuit board 540, and a power supply circuit 550, where the circuit board 540 is disposed inside the space enclosed by the housing 510, the processor 520 and the memory 530 are disposed on the circuit board 540, and the power supply circuit 550 is configured to supply power to the circuits or devices of the electronic device;
  • the memory 530 is configured to store executable program code;
  • the processor 520 is configured to run a program corresponding to the executable program code by reading the executable program code stored in the memory 530, so as to perform the voice recognition method described in the above method embodiments.
  • The foregoing voice recognition method may include: obtaining to-be-recognized voice information; recognizing it to obtain the current speech recognition information; determining whether there is saved to-be-spliced speech recognition information;
  • if there is, the to-be-spliced speech recognition information and the current speech recognition information are spliced to obtain spliced speech recognition information, and it is determined whether the spliced speech recognition information has complete semantics; if so, the spliced speech recognition information is determined as the voice recognition result;
  • if not, the spliced speech recognition information is saved as the to-be-spliced speech recognition information, and the step of obtaining the to-be-recognized voice information is continued.
  • Mobile communication devices: these devices feature mobile communication functions and are mainly aimed at providing voice and data communication. Such terminals include smart phones (such as the iPhone), multimedia phones, functional phones, and low-end phones.
  • Ultra-mobile personal computer devices: these devices belong to the category of personal computers, have computing and processing functions, and generally also have mobile Internet access. Such terminals include PDA, MID, and UMPC devices, such as the iPad.
  • Portable entertainment devices: these devices can display and play multimedia content. Such devices include audio and video players (such as the iPod), handheld game consoles, e-book readers, smart toys, and portable car navigation devices.
  • Servers: a server consists of a processor, a hard disk, memory, a system bus, and so on. A server is similar in architecture to a general-purpose computer, but, because it must provide highly reliable services, it has higher requirements in terms of processing capability, stability, reliability, security, scalability, and manageability.
  • In the embodiment of the present application, the processor of the electronic device runs the program corresponding to the executable program code by reading the executable program code stored in the memory, thereby obtaining to-be-recognized voice information; recognizing it to obtain the current speech recognition information corresponding to the to-be-recognized voice information; determining whether there is saved to-be-spliced speech recognition information; if present, splicing the to-be-spliced speech recognition information and the current speech recognition information to obtain spliced speech recognition information; determining whether the spliced speech recognition information has complete semantics; if so, determining the spliced speech recognition information as the voice recognition result; if not, saving the spliced speech recognition information as the to-be-spliced speech recognition information and continuing to obtain to-be-recognized voice information.
  • In this way, when there is no saved to-be-spliced speech recognition information, a complete-semantics determination is performed on the current speech recognition information; when there is, the saved to-be-spliced speech recognition information and the current speech recognition information are spliced.
  • The spliced speech recognition information is then checked for complete semantics; if it has none, voice information continues to be obtained and spliced until complete semantics are obtained.
  • The embodiment of the present application thereby ensures the integrity of the recognized semantics and improves the recognition of incoherent speech.
  • Optionally, if there is no saved to-be-spliced speech recognition information, the above method may further include: determining whether the current speech recognition information has complete semantics;
  • if so, the current speech recognition information is determined as the voice recognition result;
  • if not, the current speech recognition information is saved as the to-be-spliced speech recognition information, and the step of obtaining the to-be-recognized voice information is continued.
  • Optionally, the step of determining whether the spliced speech recognition information has complete semantics may include: performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result; matching the semantic parsing result against the intents stored in the preset intent library to obtain the user intent; and obtaining, from the intent library, the response information corresponding to the user intent;
  • if the response information is prompt information indicating that no service can be provided, it is determined that the spliced speech recognition information does not have complete semantics;
  • if the response information is not such prompt information, it is determined that the spliced speech recognition information has complete semantics.
  • Optionally, the above intent library may be a tree-structured intent library;
  • the step of performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result includes: extracting, according to a preset rule, multiple segments of feature text from the speech recognition information, where each segment of feature text corresponds one-to-one to a level in the preset tree-structured intent library;
  • the step of matching the semantic parsing result against the intents stored in the preset intent library to obtain the user intent includes: matching the feature text of each level against the candidate intents of that level in the tree-structured intent library, level by level, until all the feature text has been matched.
  • Optionally, after the spliced speech recognition information is saved as the to-be-spliced speech recognition information, the method may further include:
  • if no to-be-recognized voice information is obtained when the first preset duration is reached, performing semantic parsing on the saved to-be-spliced speech recognition information to obtain a semantic parsing result;
  • and outputting, to the user, the preset service prompt voice information corresponding to the semantic parsing result.
  • Optionally, after the spliced speech recognition information is saved as the to-be-spliced speech recognition information, the method may further include:
  • if no to-be-recognized voice information is obtained when the first preset duration is reached, outputting voice recognition failure prompt voice information to the user.
  • Optionally, the above electronic device may be a smart device;
  • the step of obtaining the to-be-recognized voice information may include: detecting voice information in real time; after the user is detected inputting voice information, when the silence duration reaches the second preset duration, determining the voice information input by the user as the to-be-recognized voice information.
  • Optionally, the electronic device may be a cloud server communicatively connected to the smart device;
  • the step of obtaining the to-be-recognized voice information includes: receiving the to-be-recognized voice information sent by the smart device, where the to-be-recognized voice information sent by the smart device is: after the smart device detects the user inputting voice information and the silence duration reaches the second preset duration, the voice information input by the user is determined as the to-be-recognized voice information and then sent to the cloud server.
  • An embodiment of the present application further provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the following steps: obtaining to-be-recognized voice information; recognizing it to obtain the current speech recognition information; determining whether there is saved to-be-spliced speech recognition information;
  • if there is, the to-be-spliced speech recognition information and the current speech recognition information are spliced to obtain spliced speech recognition information, and it is determined whether the spliced speech recognition information has complete semantics; if so, the spliced speech recognition information is determined as the voice recognition result;
  • if not, the spliced speech recognition information is saved as the to-be-spliced speech recognition information, and the step of obtaining the to-be-recognized voice information is continued.
  • In this way, when there is no saved to-be-spliced speech recognition information, a complete-semantics determination is performed on the current speech recognition information; when there is, the saved to-be-spliced speech recognition information and the current speech recognition information are spliced.
  • The spliced speech recognition information is then checked for complete semantics; if it has none, voice information continues to be obtained and spliced until complete semantics are obtained.
  • The embodiment of the present application thereby ensures the integrity of the recognized semantics and improves the recognition of incoherent speech.
  • Optionally, if there is no saved to-be-spliced speech recognition information, the above method may further include: determining whether the current speech recognition information has complete semantics;
  • if so, the current speech recognition information is determined as the voice recognition result;
  • if not, the current speech recognition information is saved as the to-be-spliced speech recognition information, and the step of obtaining the to-be-recognized voice information is continued.
  • Optionally, the step of determining whether the spliced speech recognition information has complete semantics may include: performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result; matching the semantic parsing result against the intents stored in the preset intent library to obtain the user intent; and obtaining, from the intent library, the response information corresponding to the user intent;
  • if the response information is prompt information indicating that no service can be provided, it is determined that the spliced speech recognition information does not have complete semantics;
  • if the response information is not such prompt information, it is determined that the spliced speech recognition information has complete semantics.
  • Optionally, the above intent library may be a tree-structured intent library;
  • the step of performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result includes: extracting, according to a preset rule, multiple segments of feature text from the speech recognition information, where each segment of feature text corresponds one-to-one to a level in the preset tree-structured intent library;
  • the step of matching the semantic parsing result against the intents stored in the preset intent library to obtain the user intent includes: matching the feature text of each level against the candidate intents of that level in the tree-structured intent library, level by level, until all the feature text has been matched.
  • Optionally, after the spliced speech recognition information is saved as the to-be-spliced speech recognition information, the method may further include:
  • if no to-be-recognized voice information is obtained when the first preset duration is reached, performing semantic parsing on the saved to-be-spliced speech recognition information to obtain a semantic parsing result;
  • and outputting, to the user, the preset service prompt voice information corresponding to the semantic parsing result.
  • Optionally, after the spliced speech recognition information is saved as the to-be-spliced speech recognition information, the method may further include:
  • if no to-be-recognized voice information is obtained when the first preset duration is reached, outputting voice recognition failure prompt voice information to the user.
  • Optionally, the computer readable storage medium is a readable storage medium of a smart device;
  • the step of obtaining the to-be-recognized voice information may include: detecting voice information in real time; after the user is detected inputting voice information, when the silence duration reaches the second preset duration, determining the voice information input by the user as the to-be-recognized voice information.
  • Optionally, the computer readable storage medium is a readable storage medium of a cloud server communicatively connected to the smart device;
  • the step of obtaining the to-be-recognized voice information includes: receiving the to-be-recognized voice information sent by the smart device, where the to-be-recognized voice information sent by the smart device is: after the smart device detects the user inputting voice information and the silence duration reaches the second preset duration, the voice information input by the user is determined as the to-be-recognized voice information and then sent to the cloud server.
  • An embodiment of the present application further provides an application program which, when run, executes the voice recognition method provided by the embodiments of the present application.
  • The application program, when executed by a processor, implements the following steps: obtaining to-be-recognized voice information; recognizing it to obtain the current speech recognition information; determining whether there is saved to-be-spliced speech recognition information;
  • if there is, the to-be-spliced speech recognition information and the current speech recognition information are spliced to obtain spliced speech recognition information, and it is determined whether the spliced speech recognition information has complete semantics; if so, the spliced speech recognition information is determined as the voice recognition result;
  • if not, the spliced speech recognition information is saved as the to-be-spliced speech recognition information, and the step of obtaining the to-be-recognized voice information is continued.
  • In this way, when there is no saved to-be-spliced speech recognition information, a complete-semantics determination is performed on the current speech recognition information; when there is, the saved to-be-spliced speech recognition information and the current speech recognition information are spliced.
  • The spliced speech recognition information is then checked for complete semantics; if it has none, voice information continues to be obtained and spliced until complete semantics are obtained.
  • The embodiment of the present application thereby ensures the integrity of the recognized semantics and improves the recognition of incoherent speech.
  • Optionally, if there is no saved to-be-spliced speech recognition information, the above method may further include: determining whether the current speech recognition information has complete semantics;
  • if so, the current speech recognition information is determined as the voice recognition result;
  • if not, the current speech recognition information is saved as the to-be-spliced speech recognition information, and the step of obtaining the to-be-recognized voice information is continued.
  • Optionally, the step of determining whether the spliced speech recognition information has complete semantics may include: performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result; matching the semantic parsing result against the intents stored in the preset intent library to obtain the user intent; and obtaining, from the intent library, the response information corresponding to the user intent;
  • if the response information is prompt information indicating that no service can be provided, it is determined that the spliced speech recognition information does not have complete semantics;
  • if the response information is not such prompt information, it is determined that the spliced speech recognition information has complete semantics.
  • Optionally, the above intent library may be a tree-structured intent library;
  • the step of performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result includes: extracting, according to a preset rule, multiple segments of feature text from the speech recognition information, where each segment of feature text corresponds one-to-one to a level in the preset tree-structured intent library;
  • the step of matching the semantic parsing result against the intents stored in the preset intent library to obtain the user intent includes: matching the feature text of each level against the candidate intents of that level in the tree-structured intent library, level by level, until all the feature text has been matched.
  • Optionally, after the spliced speech recognition information is saved as the to-be-spliced speech recognition information, the method may further include:
  • if no to-be-recognized voice information is obtained when the first preset duration is reached, performing semantic parsing on the saved to-be-spliced speech recognition information to obtain a semantic parsing result;
  • and outputting, to the user, the preset service prompt voice information corresponding to the semantic parsing result.
  • Optionally, after the spliced speech recognition information is saved as the to-be-spliced speech recognition information, the method may further include:
  • if no to-be-recognized voice information is obtained when the first preset duration is reached, outputting voice recognition failure prompt voice information to the user.
  • Optionally, the above application program is stored in a smart device;
  • the step of obtaining the to-be-recognized voice information may include: detecting voice information in real time; after the user is detected inputting voice information, when the silence duration reaches the second preset duration, determining the voice information input by the user as the to-be-recognized voice information.
  • Optionally, the above application program is stored in a cloud server communicatively connected to the smart device;
  • the step of obtaining the to-be-recognized voice information may include: receiving the to-be-recognized voice information sent by the smart device, where the to-be-recognized voice information sent by the smart device is: after the smart device detects the user inputting voice information and the silence duration reaches the second preset duration, the voice information input by the user is determined as the to-be-recognized voice information and then sent to the cloud server.
  • the description is relatively simple, and the relevant parts can be referred to the description of the method embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • Telephonic Communication Services (AREA)
  • Machine Translation (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

Provided are a speech recognition method and apparatus, and an electronic device. The speech recognition method comprises: obtaining voice information to be recognized (S110); recognizing the voice information to be recognized to obtain current speech recognition information corresponding to the voice information to be recognized (S120); determining whether there is saved speech recognition information to be spliced (S130); if so, splicing the speech recognition information to be spliced with the current speech recognition information to obtain spliced speech recognition information (S140); determining whether the spliced speech recognition information has complete semantics (S150); if so, determining the spliced speech recognition information as a speech recognition result (S160); if not, saving the spliced speech recognition information as speech recognition information to be spliced, and continuing to perform the step of obtaining voice information to be recognized (S170). The speech recognition method ensures the integrity of the recognized semantics and improves the speech recognition effect for incoherent speech.

Description

Speech recognition method and apparatus, and electronic device
This application claims priority to Chinese Patent Application No. 201710229218.8, filed with the Chinese Patent Office on April 10, 2017 and entitled "Speech recognition method and apparatus, and electronic device", which is incorporated herein by reference in its entirety.
Technical Field
The present application relates to the field of speech recognition technology, and in particular to a speech recognition method and apparatus, and an electronic device.
Background
At present, many smart devices have a speech recognition function, which is usually implemented in one of the following two ways:
In the first, the smart device receives voice instruction information, recognizes the voice instruction information to obtain the recognized instruction information, and responds to the recognized instruction information.
In the second, the smart device receives voice instruction information and sends it to a cloud server; the cloud server recognizes the voice instruction information, obtains the recognized instruction information, responds to it, and returns the response information to the smart device.
In real life, users often speak incoherently out of hesitation when uttering voice instructions. For example, a user who wants to listen to music but cannot momentarily recall a specific song will often say something like "I want to listen to... that... um... 忘情水" ("我想听…那个…嗯…忘情水").
In such cases, speech recognition by either of the above approaches will fail. This is because the prior art usually recognizes only continuous speech: once a pause occurs, the sentence is considered finished and recognition begins. In the situation above, only "I want to listen to" ("我想听") would be recognized and everything after it would be discarded, so the smart device would output an error prompt such as "Voice instruction error, please re-enter" or "Sorry, I did not understand".
That is, because of the silent segments in incoherent speech, existing speech recognition methods usually produce incomplete semantics when recognizing such speech, which degrades the speech recognition effect.
Summary
The purpose of the present application is to provide a speech recognition method and apparatus, and an electronic device, so as to improve the speech recognition effect for incoherent speech.
To achieve the above purpose, an embodiment of the present application provides a speech recognition method, applied to an electronic device, the method comprising:
obtaining voice information to be recognized;
recognizing the voice information to be recognized to obtain current speech recognition information corresponding to the voice information to be recognized;
determining whether there is saved speech recognition information to be spliced;
if so, splicing the speech recognition information to be spliced with the current speech recognition information to obtain spliced speech recognition information;
determining whether the spliced speech recognition information has complete semantics;
if so, determining the spliced speech recognition information as a speech recognition result;
if not, saving the spliced speech recognition information as speech recognition information to be spliced, and continuing to perform the step of obtaining voice information to be recognized.
Optionally, the method further comprises:
if it is determined that there is no saved speech recognition information to be spliced, determining whether the current speech recognition information has complete semantics;
if so, determining the current speech recognition information as the speech recognition result;
if not, saving the current speech recognition information as speech recognition information to be spliced, and continuing to perform the step of obtaining voice information to be recognized.
Optionally, the step of determining whether the spliced speech recognition information has complete semantics comprises:
performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result;
matching the semantic parsing result against the intents stored in a preset intent library to obtain a user intent;
obtaining, from the intent library, response information corresponding to the user intent;
determining whether the response information is prompt information indicating that no service can be provided;
if the response information is prompt information indicating that no service can be provided, determining that the spliced speech recognition information does not have complete semantics;
if the response information is not prompt information indicating that no service can be provided, determining that the spliced speech recognition information has complete semantics.
Optionally, the intent library is a tree-structured intent library;
the step of performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result comprises:
extracting, according to preset rules, multiple feature text segments from the speech recognition information, wherein each feature text segment corresponds one-to-one to a level of the preset tree-structured intent library;
the step of matching the semantic parsing result against the intents stored in the preset intent library to obtain the user intent comprises:
determining the feature text segment corresponding to the first level as the feature text of the current level;
determining all intents at the first level of the tree-structured intent library as candidate intents;
matching the feature text of the current level against the candidate intents to obtain a current intent;
determining whether all feature text segments have been matched;
if so, determining the current intent as the user intent;
if not, determining the feature text segment corresponding to the next level as the feature text of the current level, and determining all intents at the next level under the current intent in the tree-structured intent library as the candidate intents;
returning to the step of matching the feature text of the current level against the candidate intents to obtain the current intent.
Optionally, after the spliced speech recognition information is saved as speech recognition information to be spliced, the method further comprises:
if no voice information to be recognized has been obtained when a first preset duration is reached, performing semantic parsing on the saved speech recognition information to be spliced to obtain a semantic parsing result;
outputting to the user preset service prompt voice information corresponding to the semantic parsing result.
Optionally, after the spliced speech recognition information is saved as speech recognition information to be spliced, the method further comprises:
if no voice information to be recognized has been obtained when a first preset duration is reached, outputting speech recognition failure prompt voice information to the user.
Optionally, the electronic device is a smart device;
the step of obtaining voice information to be recognized comprises:
detecting voice information in real time;
after detecting that the user is inputting voice information, determining the voice information input by the user as the voice information to be recognized when the silence duration reaches a second preset duration.
Optionally, the electronic device is a cloud server communicatively connected to a smart device;
the step of obtaining voice information to be recognized comprises: receiving the voice information to be recognized sent by the smart device, wherein the voice information to be recognized sent by the smart device is: the voice information input by the user that the smart device, after detecting the user inputting voice information, determines as the voice information to be recognized when the silence duration reaches the second preset duration, and then sends to the cloud server.
An embodiment of the present application further provides a speech recognition apparatus, applied to an electronic device, the apparatus comprising:
an obtaining module, configured to obtain voice information to be recognized;
a recognition module, configured to recognize the voice information to be recognized to obtain current speech recognition information corresponding to the voice information to be recognized;
a first judging module, configured to determine whether there is saved speech recognition information to be spliced;
a splicing module, configured to, when the judging result of the first judging module is that such information exists, splice the speech recognition information to be spliced with the current speech recognition information to obtain spliced speech recognition information;
a first determining module, configured to determine whether the spliced speech recognition information has complete semantics;
a second determining module, configured to, when the determination result of the first determining module is yes, determine the spliced speech recognition information as a speech recognition result;
a third determining module, configured to, when the determination result of the first determining module is no, save the spliced speech recognition information as speech recognition information to be spliced and trigger the obtaining module.
Optionally, the apparatus further comprises:
a second judging module, configured to, when the first judging module determines that there is no saved speech recognition information to be spliced, determine whether the current speech recognition information has complete semantics;
a fourth determining module, configured to, when the judging result of the second judging module is yes, determine the current speech recognition information as the speech recognition result;
a fifth determining module, configured to, when the judging result of the second judging module is no, save the current speech recognition information as speech recognition information to be spliced and trigger the obtaining module.
Optionally, the first determining module comprises:
a parsing unit, configured to perform semantic parsing on the spliced speech recognition information to obtain a semantic parsing result;
a matching unit, configured to match the semantic parsing result against the intents stored in a preset intent library to obtain a user intent;
an obtaining unit, configured to obtain, from the intent library, response information corresponding to the user intent;
a judging unit, configured to determine whether the response information is prompt information indicating that no service can be provided;
a first determining unit, configured to, when the judging unit determines that the response information is prompt information indicating that no service can be provided, determine that the spliced speech recognition information does not have complete semantics;
a second determining unit, configured to, when the judging unit determines that the response information is not prompt information indicating that no service can be provided, determine that the spliced speech recognition information has complete semantics.
Optionally, the intent library is a tree-structured intent library;
the parsing unit is specifically configured to extract, according to preset rules, multiple feature text segments from the speech recognition information, wherein each feature text segment corresponds one-to-one to a level of the preset tree-structured intent library;
the matching unit comprises:
a first determining subunit, configured to determine the feature text segment corresponding to the first level as the feature text of the current level;
a second determining subunit, configured to determine all intents at the first level of the tree-structured intent library as candidate intents;
a matching subunit, configured to match the feature text of the current level against the candidate intents to obtain a current intent;
a judging subunit, configured to determine whether all feature text segments have been matched;
a third determining subunit, configured to, when the judging result of the judging subunit is yes, determine the current intent as the user intent;
a fourth determining subunit, configured to, when the judging result of the judging subunit is no, determine the feature text segment corresponding to the next level as the feature text of the current level, determine all intents at the next level under the current intent in the tree-structured intent library as the candidate intents, and trigger the matching subunit.
Optionally, the apparatus further comprises:
a parsing module, configured to, if no voice information to be recognized has been obtained when a first preset duration is reached, perform semantic parsing on the saved speech recognition information to be spliced to obtain a semantic parsing result;
a first output module, configured to output to the user preset service prompt voice information corresponding to the semantic parsing result.
Optionally, the apparatus further comprises:
a second output module, configured to, if no voice information to be recognized has been obtained when a first preset duration is reached, output speech recognition failure prompt voice information to the user.
Optionally, the electronic device is a smart device;
the obtaining module comprises:
a detection unit, configured to detect voice information in real time;
a third determining unit, configured to, after it is detected that the user is inputting voice information, determine the voice information input by the user as the voice information to be recognized when the silence duration reaches a second preset duration.
Optionally, the electronic device is a cloud server communicatively connected to a smart device;
the obtaining module is specifically configured to receive the voice information to be recognized sent by the smart device, wherein the voice information to be recognized sent by the smart device is: the voice information input by the user that the smart device, after detecting the user inputting voice information, determines as the voice information to be recognized when the silence duration reaches the second preset duration, and then sends to the cloud server.
An embodiment of the present application further provides an electronic device, comprising: a housing, a processor, a memory, a circuit board and a power supply circuit, wherein the circuit board is disposed inside the space enclosed by the housing, and the processor and the memory are disposed on the circuit board; the power supply circuit is configured to supply power to the circuits or components of the electronic device; the memory is configured to store executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the speech recognition method described above.
An embodiment of the present application further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the speech recognition method described above.
An embodiment of the present application further provides an application program which, when run, performs the speech recognition method described above.
The speech recognition method and apparatus, and the electronic device provided by the embodiments of the present application obtain voice information to be recognized; recognize the voice information to be recognized to obtain the corresponding current speech recognition information; determine whether there is saved speech recognition information to be spliced; if so, splice the speech recognition information to be spliced with the current speech recognition information to obtain spliced speech recognition information; determine whether the spliced speech recognition information has complete semantics; if so, determine the spliced speech recognition information as the speech recognition result; if not, save the spliced speech recognition information as speech recognition information to be spliced and continue to obtain voice information to be recognized.
In the embodiments of the present application, complete semantics are checked on the current speech recognition information when there is no saved speech recognition information to be spliced, and the saved speech recognition information to be spliced and the current speech recognition information are spliced to obtain spliced speech recognition information whose semantics are likewise checked for completeness. If the semantics are not complete, voice information continues to be obtained and the speech recognition information continues to be spliced until complete semantics are obtained. The embodiments of the present application thus ensure the integrity of the recognized semantics and improve the speech recognition effect for incoherent speech.
Brief Description of the Drawings
In order to describe the technical solutions of the embodiments of the present application or of the prior art more clearly, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of a speech recognition method provided by an embodiment of the present application;
Fig. 2 is a flowchart, provided by an embodiment of the present application, of determining whether the spliced speech recognition information has complete semantics;
Fig. 3 is a schematic structural diagram of a speech recognition apparatus provided by an embodiment of the present application;
Fig. 4 is a schematic structural diagram of a first determining module provided by an embodiment of the present application;
Fig. 5 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present application without creative effort fall within the scope of protection of the present application.
To achieve the above purpose, an embodiment of the present application provides a speech recognition method that can be applied to an electronic device, which may be a smart device or a cloud server communicatively connected to a smart device. In the embodiments of the present application, the smart device may be any device with a speech recognition function, such as a smartphone, a smart speaker, a smart robot or a smart tablet.
Fig. 1 is a flowchart of the speech recognition method provided by an embodiment of the present application. The method comprises:
S110: obtaining voice information to be recognized.
In this embodiment, the voice information is voice information containing speech uttered by the user.
Specifically, the electronic device may listen to the sound around it, acquire the corresponding voice information and use it as the voice information to be recognized.
In a specific implementation of this embodiment of the present application, when the electronic device is a smart device, step S110 may comprise:
A1: detecting voice information in real time.
A2: after detecting that the user is inputting voice information, determining the voice information input by the user as the voice information to be recognized when the silence duration reaches a second preset duration.
Specifically, the smart device monitors the user's wake-up speech, i.e., speech containing a preset wake-up word for waking up the smart device. After being activated by the wake-up speech, it detects the surrounding voice information in real time. Suppose the surrounding sound volume is low at the initial moment, i.e., the device is in a silent state. When the detected volume suddenly exceeds a preset value, it can be determined that a user is currently inputting voice information; a speech phase begins, and the smart device collects the voice information of this phase. After the speech phase has lasted for a while and the volume falls below the preset value, a silent phase begins again. When the duration of the silent phase reaches the second preset duration, the voice information input by the user, i.e., the voice information collected by the smart device, is determined as the voice information to be recognized. In this embodiment, the second preset duration can be set freely and is preferably 500 milliseconds.
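The device-side endpointing described above can be pictured with a short, self-contained Python sketch. Only the 500 ms silence rule comes from this embodiment; the frame length, the amplitude threshold, and the assumption that audio arrives as lists of normalized samples are illustrative assumptions:

```python
FRAME_MS = 20     # assumed frame length of the audio front end
SILENCE_MS = 500  # the "second preset duration" preferred above
THRESHOLD = 0.02  # assumed amplitude threshold separating speech from silence

def segment_utterance(frames):
    """Collect one utterance: start at the first loud frame, and finish
    once the silence duration reaches SILENCE_MS; the collected frames
    are the voice information to be recognized."""
    utterance, in_speech, silent_ms = [], False, 0
    for frame in frames:                      # frame: list of samples in [-1.0, 1.0]
        loud = max(abs(s) for s in frame) > THRESHOLD
        if not in_speech:
            if loud:                          # the user starts to input voice information
                in_speech = True
                utterance.append(frame)
            continue
        utterance.append(frame)
        silent_ms = 0 if loud else silent_ms + FRAME_MS
        if silent_ms >= SILENCE_MS:           # silence reached the second preset duration
            break
    return utterance if in_speech else None
```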
In another specific implementation of this embodiment of the present application, when the electronic device is a cloud server communicatively connected to a smart device, step S110 may comprise:
receiving the voice information to be recognized sent by the smart device, wherein the voice information to be recognized sent by the smart device is: the voice information input by the user that the smart device, after detecting the user inputting voice information, determines as the voice information to be recognized when the silence duration reaches the second preset duration, and then sends to the cloud server.
Specifically, when the smart device detects that the sound volume exceeds a preset value, it starts acquiring voice information. After the speech phase has lasted for a while, the volume falls below the preset value and a silent phase begins again; when the duration of the silent phase reaches the second preset duration, the smart device stops acquiring voice information, determines the acquired voice information as the voice information to be recognized, and sends it to the cloud server, which receives it.
S120: recognizing the voice information to be recognized to obtain current speech recognition information corresponding to the voice information to be recognized.
Specifically, after acquiring the voice information to be recognized, the electronic device performs speech recognition on it to obtain the corresponding speech recognition information. Since the voice information to be recognized can belong to any time period, rather than necessarily being the first voice information received by the electronic device, the obtained speech recognition information can be defined as the current speech recognition information. The specific process of speech recognition is prior art and is not described here.
S130: determining whether there is saved speech recognition information to be spliced; if so, performing step S140; if not, performing step S180.
S140: splicing the speech recognition information to be spliced with the current speech recognition information to obtain spliced speech recognition information.
In this embodiment, speech recognition information to be spliced refers to speech recognition information without complete semantics; it still needs further splicing before complete semantics can be obtained.
Specifically, after obtaining the current speech recognition information, the electronic device determines whether there is saved speech recognition information to be spliced. If there is, the user's speech is incoherent and the current speech recognition information is not the first speech recognition information received by the electronic device, so it needs to be spliced with the previously saved speech recognition information to be spliced to obtain the spliced speech recognition information.
For example, when the speech uttered by the user is "那个…嗯" ("that... um"), the current speech recognition information is "那个", the filler word is "嗯", and the saved speech recognition information to be spliced is "我想听" ("I want to listen to"); the two are spliced, and the spliced speech recognition information is "我想听+那个".
Specifically, if there is no saved speech recognition information to be spliced, the current speech recognition information is the first speech recognition information received by the electronic device, and step S180 is performed.
S150: determining whether the spliced speech recognition information has complete semantics; if so, performing step S160; if not, performing step S170.
S160: determining the spliced speech recognition information as the speech recognition result.
S170: saving the spliced speech recognition information as speech recognition information to be spliced, and continuing to perform step S110.
Specifically, after obtaining the spliced speech recognition information, the electronic device determines whether it has complete meaning. If it has complete semantics, the recognition process succeeds and the spliced speech recognition information is determined as the speech recognition result. If it does not, the spliced speech recognition information is saved as speech recognition information to be spliced, and the device continues to wait for and acquire the next arriving voice information to be recognized, thereby ensuring that the user's speech is acquired in full by the electronic device and improving the speech recognition effect for incoherent speech.
S180: if it is determined that there is no saved speech recognition information to be spliced, determining whether the current speech recognition information has complete semantics; if so, performing step S190; if not, performing step S1100.
S190: determining the current speech recognition information as the speech recognition result.
S1100: saving the current speech recognition information as speech recognition information to be spliced, and continuing to perform step S110.
Specifically, if there is no saved speech recognition information to be spliced, the current speech recognition information is the first speech recognition information received by the electronic device, and whether it has complete semantics is determined directly. If it does, the user's current speech is coherent, and the current speech recognition information is determined as the speech recognition result. If it does not, the user's current speech is incoherent; the current speech recognition information is saved as speech recognition information to be spliced, and the device continues to wait for and acquire the next arriving voice information to be recognized, which further ensures that the user's speech is acquired in full by the electronic device and improves the speech recognition effect for incoherent speech.
The speech recognition method provided by this embodiment of the present application obtains voice information to be recognized; recognizes it to obtain the corresponding current speech recognition information; determines whether there is saved speech recognition information to be spliced; if so, splices the speech recognition information to be spliced with the current speech recognition information to obtain spliced speech recognition information; determines whether the spliced speech recognition information has complete semantics; if so, determines it as the speech recognition result; if not, saves it as speech recognition information to be spliced and continues to obtain voice information to be recognized.
In this embodiment of the present application, complete semantics are checked on the current speech recognition information when there is no saved speech recognition information to be spliced, and the saved speech recognition information to be spliced and the current speech recognition information are spliced to obtain spliced speech recognition information whose semantics are likewise checked for completeness. If the semantics are not complete, voice information continues to be obtained and the speech recognition information continues to be spliced until complete semantics are obtained. This embodiment of the present application thus ensures the integrity of the recognized semantics and improves the speech recognition effect for incoherent speech.
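Steps S110 to S1100 can be condensed into a single loop. The following is a minimal sketch, not the embodiment's implementation; get_utterance, asr and has_complete_semantics are placeholder callables standing in for steps S110, S120 and S150/S180:

```python
def recognize(get_utterance, asr, has_complete_semantics):
    """One pass of the method of Fig. 1: splice each newly recognized
    fragment onto the saved fragment until the semantics are complete."""
    pending = None                                # saved speech recognition information to be spliced
    while True:
        audio = get_utterance()                   # S110: obtain voice information to be recognized
        current = asr(audio)                      # S120: current speech recognition information
        candidate = current if pending is None else pending + current  # S130/S140
        if has_complete_semantics(candidate):     # S150 (or S180 for the first fragment)
            return candidate                      # S160/S190: the speech recognition result
        pending = candidate                       # S170/S1100: save and keep listening
```

With the example from the background, the loop keeps accumulating "我想听", "那个" and "嗯" and returns only once the spliced text finally parses to a serviceable intent, instead of failing on "我想听" alone.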
In a specific implementation of this embodiment of the present application, step S150 may comprise the following steps:
B1: performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result.
B2: matching the semantic parsing result against the intents stored in a preset intent library to obtain a user intent.
B3: obtaining, from the intent library, response information corresponding to the user intent.
B4: determining whether the response information is prompt information indicating that no service can be provided; if it is, performing step B5; if it is not, performing step B6.
B5: determining that the spliced speech recognition information does not have complete semantics.
B6: determining that the spliced speech recognition information has complete semantics.
In this embodiment, the user intent is obtained by matching the semantic parsing result against the intents stored in the preset intent library, the response information corresponding to the user intent is obtained from the intent library, and whether the response information is prompt information indicating that no service can be provided is determined, thereby determining whether the speech recognition information has complete semantics. This makes the process of determining whether speech recognition information has complete semantics easier to implement.
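A minimal sketch of steps B1 to B6 follows. Modeling the intent library as an object with a response_for lookup, and marking its fallback prompts with a fixed substring, are both assumptions made purely for illustration:

```python
CANNOT_SERVE = "无法提供服务"   # assumed marker carried by the library's fallback prompts

def has_complete_semantics(text, parse, match_intent, intent_library):
    """B1-B6: parse the spliced text, match it to a user intent, fetch the
    corresponding response information, and treat a 'cannot provide
    service' prompt as the sign that the semantics are incomplete."""
    result = parse(text)                            # B1: semantic parsing result
    intent = match_intent(result, intent_library)   # B2: user intent
    response = intent_library.response_for(intent)  # B3: response information (assumed API)
    return CANNOT_SERVE not in response             # B4-B6
```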
To further explain step S150, it may comprise the steps shown in Fig. 2, which is a flowchart, provided by an embodiment of the present application, of determining whether the spliced speech recognition information has complete semantics. Step B1 corresponds to step S210, step B2 corresponds to steps S220 to S270, step B3 corresponds to step S280, step B4 corresponds to step S290, step B5 corresponds to step S2100, and step B6 corresponds to step S2110.
S210: extracting, according to preset rules, multiple feature text segments from the speech recognition information, wherein each feature text segment corresponds one-to-one to a level of the preset tree-structured intent library.
Specifically, the speech recognition information may be input into a preset feature text extraction model, and the feature text segments of each level output by the model are obtained.
The feature text extraction model is used to perform semantic parsing on the speech recognition information and obtain the feature text corresponding to each level of the preset tree-structured intent library. In this embodiment, all levels of the tree-structured intent library may correspond to a single feature text extraction model: the speech recognition result is input into the model, and the model outputs the feature text segments of each level.
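The embodiment leaves the feature text extraction model unspecified, so the sketch below substitutes simple preset rules, one per level of an assumed three-level hierarchy (action, content type, name), purely to make the data flow concrete:

```python
import re

def extract_feature_texts(text):
    """Return one feature text segment per level of an assumed
    action / content-type / name hierarchy, e.g.
    '我想听歌曲忘情水' -> ['听', '歌曲', '忘情水']."""
    features = []
    action = re.search(r"(听|看|播放)", text)        # level 1: action word
    if not action:
        return features
    features.append(action.group(1))
    rest = text[action.end():]
    category = re.search(r"(歌曲|电影|新闻)", rest)  # level 2: content type
    if not category:
        return features
    features.append(category.group(1))
    name = rest[category.end():].strip("的 ")        # level 3: the remaining name
    if name:
        features.append(name)
    return features
```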
S220: determining the feature text segment corresponding to the first level as the feature text of the current level.
S230: determining all intents at the first level of the tree-structured intent library as candidate intents.
Specifically, the electronic device may determine the feature text segment corresponding to the first level as the feature text of the current level and determine all intents at the first level of the tree-structured intent library as candidate intents, to facilitate the subsequent steps.
S240: matching the feature text of the current level against the candidate intents to obtain a current intent.
Specifically, the electronic device may match the feature text of the current level determined above against the candidate intents to obtain the current intent. In one implementation, the successfully matched candidate intent may be taken directly as the current intent.
S250: determining whether all feature text segments have been matched; if so, performing step S260; if not, performing step S270.
S260: determining the current intent as the user intent.
If all feature text segments have been matched, the speech recognition information contains only the feature text corresponding to the first level, and since that feature text has already been matched, the electronic device can determine the current intent as the user intent. For example, if the current intent is "listen to songs" (听歌曲), the user intent is "listen to songs".
S270: determining the feature text segment corresponding to the next level as the feature text of the current level; determining all intents at the next level under the current intent in the tree-structured intent library as candidate intents; and returning to step S240.
If not all feature text segments have been matched, the speech recognition information contains feature text corresponding to levels other than the first, and only the first-level feature text has been matched so far. The electronic device can therefore determine the feature text segment corresponding to the next level as the feature text of the current level, determine all intents at the next level under the current intent in the tree-structured intent library as the candidate intents, and return to step S240, i.e., match the feature text of the current level against the candidate intents to obtain the current intent. It can be understood that at this point the feature text of the current level is that corresponding to the second level, and the candidate intents are all second-level intents in the tree-structured intent library.
In this way, the electronic device can perform steps S240 and S250 in a loop until all feature text segments have been matched. It can be understood that matching starts with the first-level feature text against all first-level intents in the tree-structured intent library; then the second-level feature text is matched against all second-level intents, the third-level feature text against all third-level intents, and so on level by level until the feature text of all levels has been matched.
It can be understood that, when all feature text segments have been matched, the current intent constitutes the finally determined user intent; the current intent is the intent jointly formed by the candidate intent matched in the present round and all intents of the previous levels matched before it.
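The level-by-level matching of steps S220 to S270 amounts to a walk down the intent tree. The toy tree below is an assumption for illustration; the embodiment only requires that the candidate intents of each level be the children of the intent matched at the previous level:

```python
INTENT_TREE = {                       # assumed toy tree-structured intent library
    "听": {"歌曲": {"忘情水": {}}, "新闻": {}},
    "看": {"电影": {}},
}

def match_user_intent(features, tree=INTENT_TREE):
    """S220-S270: match the feature text of each level against the
    candidate intents of that level, descending one level per match."""
    node, path = tree, []
    for feature in features:          # feature text of the current level
        if feature not in node:       # candidate intents are node.keys()
            return None               # matching failed at this level
        path.append(feature)          # current intent grows level by level
        node = node[feature]          # children become the next candidates
    return "".join(path)              # user intent, e.g. "听歌曲忘情水"
```

Combined with the extraction sketch above, match_user_intent(extract_feature_texts("我想听歌曲忘情水")) returns "听歌曲忘情水", consistent with the "听歌曲" example in step S260.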
S280: obtaining, from the intent library, the response information corresponding to the user intent.
S290: determining whether the response information is prompt information indicating that no service can be provided; if it is, performing step S2100; if it is not, performing step S2110.
Specifically, the intent library contains the correspondence between all intents and response information. The electronic device matches the semantic parsing result against the intents stored in the preset intent library to obtain the user intent. Once the electronic device has obtained the user intent, it knows what service the user needs and, based on the correspondence between intents and response information, i.e., between intents and the services provided, it provides the corresponding service to the user or outputs the corresponding response information. The response information includes service response information corresponding to the user intent, as well as prompt information, issued because the user intent is incomplete, indicating that no service can be provided. For example, if the obtained user intent is "I want" (我想), the obtained response information may be a prompt such as "Sorry, the instruction is incomplete and no service can be provided".
S2100: determining that the spliced speech recognition information does not have complete semantics.
S2110: determining that the spliced speech recognition information has complete semantics.
Specifically, after obtaining the user intent, the electronic device can obtain the corresponding response information from the intent library and determine whether it is prompt information indicating that no service can be provided. If it is, it is determined that the spliced speech recognition information does not have complete semantics; if it is not, it is determined that the spliced speech recognition information has complete semantics.
Optionally, when the user hesitates for a long time and cannot think of the service content to request, the electronic device may also provide a reminder function. Therefore, after the spliced speech recognition information is saved as speech recognition information to be spliced, the method further comprises:
if no voice information to be recognized has been obtained when a first preset duration is reached, performing semantic parsing on the saved speech recognition information to be spliced to obtain a semantic parsing result, and outputting to the user preset service prompt voice information corresponding to the semantic parsing result.
In this embodiment, the first preset duration may serve as a measure of the length of time from the moment the user started uttering voice information to the current moment. If, when that length of time reaches the first preset duration, the electronic device has not obtained voice information to be recognized, the user has hesitated for a long time while trying to utter a complete sentence. For example, if the voice information uttered by the user is "我想听…那个…嗯……", the first preset duration here measures the time from the start of "我想听" to the moment after "嗯", i.e., the current moment.
In addition, the first preset duration may also serve as a measure of the length of time from the moment the user last uttered voice information to the current moment. If, when that length of time reaches the first preset duration, the electronic device has not obtained voice information to be recognized, the user has hesitated for a long time over part of a sentence. For example, if the voice information uttered by the user is "我想听…那个…嗯……", the first preset duration here measures the time from the start of the user's "嗯" to the current moment.
The first preset duration can be set freely; the longer it is, the longer the server can wait while the user hesitates. Preferably, the first preset duration may be 4 seconds.
Specifically, if the electronic device has not obtained voice information to be recognized when the first preset duration is reached, and speech recognition information to be spliced may already have been saved, the user has not finished a complete sentence, or has not said anything that can be recognized as having complete semantics, and the electronic device cannot make a targeted response. The electronic device can then perform semantic parsing on the saved speech recognition information to be spliced to obtain a semantic parsing result, and output to the user the preset service prompt voice information corresponding to that result.
For example, service prompt voice information for "我想听" ("I want to listen to") can be preset in the electronic device, such as "Do you want to listen to a song? Please tell me like this: 我想听忘情水". When the speech uttered by the user is "我想听…那个…嗯……", the speech recognition information to be spliced obtained by the electronic device is "我想听+那个+嗯"; if the electronic device has not obtained voice information to be recognized when the first preset duration is reached, it outputs the above service prompt voice information to the user.
In this embodiment, when the user hesitates for a long time, the electronic device can also provide a service prompt function, which increases the intelligence of the electronic device and improves the user experience.
Optionally, after the spliced speech recognition information is saved as speech recognition information to be spliced, the method further comprises:
if no voice information to be recognized has been obtained when the first preset duration is reached, outputting speech recognition failure prompt voice information to the user.
Specifically, if the electronic device has not obtained voice information to be recognized when the first preset duration is reached, the user has been unable to state the specific service content for a long time and will most likely not continue speaking because nothing specific comes to mind; the electronic device therefore does not need to keep waiting for voice information that may arrive, and can output speech recognition failure prompt voice information to the user, for example, "Sorry, I did not understand", "Please say it again" or "What service do you need?".
Further, to save energy, the electronic device may enter a low-power standby state after outputting the speech recognition failure prompt voice information.
In this embodiment, the electronic device can also output speech recognition failure prompt voice information, which increases the intelligence of the electronic device and improves the user experience.
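The two timeout behaviours can be sketched together. The 4-second value is the preferred first preset duration given above; resolving the saved fragment to a prompt through a dictionary lookup is a simplification of the semantic parsing the embodiment performs, and all names are illustrative:

```python
FIRST_PRESET_S = 4.0              # preferred first preset duration

SERVICE_PROMPTS = {               # assumed prompts keyed by the saved fragment
    "我想听": "您是想听歌吗，请您这样告诉我“我想听忘情水”",
}

def wait_for_speech(get_utterance_with_timeout, pending_fragment, tts):
    """If no voice information to be recognized arrives within the first
    preset duration, play a matching service prompt when one exists,
    otherwise play a recognition-failure prompt."""
    audio = get_utterance_with_timeout(FIRST_PRESET_S)
    if audio is not None:
        return audio                              # normal path: continue recognizing
    hint = SERVICE_PROMPTS.get(pending_fragment)  # simplified fragment-to-prompt lookup
    tts(hint if hint is not None else "对不起，我没听懂")
    return None
```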
It should be noted that, after determining the speech recognition result, the electronic device can perform semantic parsing on it and, based on the parsing, provide the corresponding service to the user.
For example, if the electronic device is a smart device, after determining the speech recognition result, the smart device can perform semantic parsing on it and provide the corresponding service accordingly. If the parsing result is an instruction to play audio stored on the smart device, the instruction is executed and the corresponding audio is played.
As another example, if the electronic device is a cloud server communicatively connected to a smart device, after determining the speech recognition result, the cloud server can perform semantic parsing on it and provide the corresponding service accordingly. If the parsing result is an instruction to play audio stored on the cloud server, the instruction is executed and the corresponding audio is sent to the smart device so that the smart device plays it.
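Acting on the final result differs between the two deployments only in where the audio ends up. A hedged sketch, with parse, play_locally and send_audio_to_device as placeholder callables:

```python
def respond_to_result(result, parse, is_cloud, play_locally, send_audio_to_device):
    """Parse the final speech recognition result and serve it: a smart
    device plays the audio itself, while a cloud server sends the audio
    back to the smart device for playback."""
    command = parse(result)                         # semantic parsing of the result
    if command.get("action") == "play":
        if is_cloud:
            send_audio_to_device(command["audio"])  # cloud server embodiment
        else:
            play_locally(command["audio"])          # smart device embodiment
```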
Corresponding to the method embodiment, the present application further provides a speech recognition apparatus, which can be applied to an electronic device. Fig. 3 is a schematic structural diagram of the speech recognition apparatus provided by an embodiment of the present application. The apparatus comprises:
an obtaining module 310, configured to obtain voice information to be recognized;
a recognition module 320, configured to recognize the voice information to be recognized to obtain current speech recognition information corresponding to the voice information to be recognized;
a first judging module 330, configured to determine whether there is saved speech recognition information to be spliced;
a splicing module 340, configured to, when the judging result of the first judging module 330 is that such information exists, splice the speech recognition information to be spliced with the current speech recognition information to obtain spliced speech recognition information;
a first determining module 350, configured to determine whether the spliced speech recognition information has complete semantics;
a second determining module 360, configured to, when the determination result of the first determining module 350 is yes, determine the spliced speech recognition information as a speech recognition result;
a third determining module 370, configured to, when the determination result of the first determining module 350 is no, save the spliced speech recognition information as speech recognition information to be spliced and trigger the obtaining module 310.
The speech recognition apparatus provided by this embodiment of the present application obtains voice information to be recognized; recognizes it to obtain the corresponding current speech recognition information; determines whether there is saved speech recognition information to be spliced; if so, splices the speech recognition information to be spliced with the current speech recognition information to obtain spliced speech recognition information; determines whether the spliced speech recognition information has complete semantics; if so, determines it as the speech recognition result; if not, saves it as speech recognition information to be spliced and continues to obtain voice information to be recognized.
In this embodiment of the present application, complete semantics are checked on the current speech recognition information when there is no saved speech recognition information to be spliced, and the saved speech recognition information to be spliced and the current speech recognition information are spliced to obtain spliced speech recognition information whose semantics are likewise checked for completeness. If the semantics are not complete, voice information continues to be obtained and the speech recognition information continues to be spliced until complete semantics are obtained. This embodiment of the present application thus ensures the integrity of the recognized semantics and improves the speech recognition effect for incoherent speech.
Further, the apparatus further comprises:
a second judging module 380, configured to, when the first judging module 330 determines that there is no saved speech recognition information to be spliced, determine whether the current speech recognition information has complete semantics;
a fourth determining module 390, configured to, when the judging result of the second judging module 380 is yes, determine the current speech recognition information as the speech recognition result;
a fifth determining module 3100, configured to, when the judging result of the second judging module 380 is no, save the current speech recognition information as speech recognition information to be spliced and trigger the obtaining module 310.
Fig. 4 is a schematic structural diagram of the first determining module provided by an embodiment of the present application. The first determining module 350 comprises:
a parsing unit 351, configured to perform semantic parsing on the spliced speech recognition information to obtain a semantic parsing result;
a matching unit 352, configured to match the semantic parsing result against the intents stored in a preset intent library to obtain a user intent;
an obtaining unit 353, configured to obtain, from the intent library, response information corresponding to the user intent;
a judging unit 354, configured to determine whether the response information is prompt information indicating that no service can be provided;
a first determining unit 355, configured to, when the judging unit determines that the response information is prompt information indicating that no service can be provided, determine that the spliced speech recognition information does not have complete semantics;
a second determining unit 356, configured to, when the judging unit determines that the response information is not prompt information indicating that no service can be provided, determine that the spliced speech recognition information has complete semantics.
In this embodiment, the user intent is obtained by matching the semantic parsing result against the intents stored in the preset intent library, the response information corresponding to the user intent is obtained from the intent library, and whether the response information is prompt information indicating that no service can be provided is determined, thereby determining whether the speech recognition information has complete semantics. This makes the process of determining whether speech recognition information has complete semantics easier to implement.
Further, the intent library is a tree-structured intent library;
the parsing unit 351 is specifically configured to extract, according to preset rules, multiple feature text segments from the speech recognition information, wherein each feature text segment corresponds one-to-one to a level of the preset tree-structured intent library;
the matching unit 352 comprises:
a first determining subunit 3521, configured to determine the feature text segment corresponding to the first level as the feature text of the current level;
a second determining subunit 3522, configured to determine all intents at the first level of the tree-structured intent library as candidate intents;
a matching subunit 3523, configured to match the feature text of the current level against the candidate intents to obtain a current intent;
a judging subunit 3524, configured to determine whether all feature text segments have been matched;
a third determining subunit 3525, configured to, when the judging result of the judging subunit is yes, determine the current intent as the user intent;
a fourth determining subunit 3526, configured to, when the judging result of the judging subunit is no, determine the feature text segment corresponding to the next level as the feature text of the current level, determine all intents at the next level under the current intent in the tree-structured intent library as the candidate intents, and trigger the matching subunit 3523.
Further, the apparatus further comprises: a third judging module 3110, configured to determine whether voice information to be recognized has been obtained when a first preset duration is reached;
a parsing module 3120, configured to, when the third judging module determines that no voice information to be recognized has been obtained when the first preset duration is reached, perform semantic parsing on the saved speech recognition information to be spliced to obtain a semantic parsing result;
a first output module, configured to output to the user preset service prompt voice information corresponding to the semantic parsing result.
Further, the apparatus further comprises:
a second output module, configured to, if no voice information to be recognized has been obtained when the first preset duration is reached, output speech recognition failure prompt voice information to the user.
Further, the electronic device is a smart device;
the obtaining module comprises:
a detection unit, configured to detect voice information in real time;
a third determining unit, configured to, after it is detected that the user is inputting voice information, determine the voice information input by the user as the voice information to be recognized when the silence duration reaches a second preset duration.
Further, the electronic device is a cloud server communicatively connected to a smart device;
the obtaining module is specifically configured to receive the voice information to be recognized sent by the smart device, wherein the voice information to be recognized sent by the smart device is: the voice information input by the user that the smart device, after detecting the user inputting voice information, determines as the voice information to be recognized when the silence duration reaches the second preset duration, and then sends to the cloud server.
Corresponding to the above method embodiment, an embodiment of the present application further provides an electronic device. Fig. 5 is a schematic structural diagram of the electronic device provided by an embodiment of the present application. The electronic device comprises:
a housing 510, a processor 520, a memory 530, a circuit board 540 and a power supply circuit 550, wherein the circuit board 540 is disposed inside the space enclosed by the housing 510, and the processor 520 and the memory 530 are disposed on the circuit board 540; the power supply circuit 550 is configured to supply power to the circuits or components of the electronic device; the memory 530 is configured to store executable program code; and the processor 520 runs a program corresponding to the executable program code by reading the executable program code stored in the memory 530, so as to perform the speech recognition method described in the above method embodiment.
In one implementation, the above speech recognition method may comprise:
obtaining voice information to be recognized;
recognizing the voice information to be recognized to obtain current speech recognition information corresponding to the voice information to be recognized;
determining whether there is saved speech recognition information to be spliced;
if so, splicing the speech recognition information to be spliced with the current speech recognition information to obtain spliced speech recognition information;
determining whether the spliced speech recognition information has complete semantics;
if so, determining the spliced speech recognition information as a speech recognition result;
if not, saving the spliced speech recognition information as speech recognition information to be spliced, and continuing to perform the step of obtaining voice information to be recognized.
For other implementations of the above speech recognition method, refer to the description in the foregoing method embodiment; details are not repeated here.
For the specific process by which the processor 520 performs the above steps and the other implementations of the speech recognition method, and for the further processes performed by the processor 520 by running the executable program code, refer to the description of the embodiments shown in Figs. 1 to 4 of the present application; details are not repeated here.
It should be noted that the electronic device exists in various forms, including but not limited to:
(1) Mobile communication devices: such devices are characterized by mobile communication capability, with voice and data communication as their main goal. This class of terminals includes smartphones (e.g., iPhone), multimedia phones, feature phones and low-end phones.
(2) Ultra-mobile personal computer devices: these belong to the category of personal computers, have computing and processing functions, and generally also have mobile Internet access. This class of terminals includes PDA, MID and UMPC devices, e.g., iPad.
(3) Portable entertainment devices: these can display and play multimedia content. This class of devices includes audio and video players (e.g., iPod), handheld game consoles, e-book readers, smart toys and portable in-vehicle navigation devices.
(4) Servers: devices providing computing services. A server consists of a processor, hard disk, memory, system bus, etc.; its architecture is similar to that of a general-purpose computer, but because highly reliable services must be provided, it has higher requirements in processing capability, stability, reliability, security, scalability and manageability.
(5) Other electronic apparatuses with data interaction functions.
It can be seen that, in the solution provided by this embodiment of the present application, the processor of the electronic device runs the program corresponding to the executable program code by reading the executable program code stored in the memory: voice information to be recognized is obtained; it is recognized to obtain the corresponding current speech recognition information; whether there is saved speech recognition information to be spliced is determined; if so, the speech recognition information to be spliced and the current speech recognition information are spliced to obtain spliced speech recognition information; whether the spliced speech recognition information has complete semantics is determined; if so, it is determined as the speech recognition result; if not, it is saved as speech recognition information to be spliced and voice information to be recognized continues to be obtained.
In this embodiment of the present application, complete semantics are checked on the current speech recognition information when there is no saved speech recognition information to be spliced, and the saved speech recognition information to be spliced and the current speech recognition information are spliced to obtain spliced speech recognition information whose semantics are likewise checked for completeness. If the semantics are not complete, voice information continues to be obtained and the speech recognition information continues to be spliced until complete semantics are obtained. This embodiment of the present application thus ensures the integrity of the recognized semantics and improves the speech recognition effect for incoherent speech.
The above method may further comprise:
if it is determined that there is no saved speech recognition information to be spliced, determining whether the current speech recognition information has complete semantics;
if so, determining the current speech recognition information as the speech recognition result;
if not, saving the current speech recognition information as speech recognition information to be spliced, and continuing to perform the step of obtaining voice information to be recognized.
The above step of determining whether the spliced speech recognition information has complete semantics may comprise:
performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result;
matching the semantic parsing result against the intents stored in a preset intent library to obtain a user intent;
obtaining, from the intent library, response information corresponding to the user intent;
determining whether the response information is prompt information indicating that no service can be provided;
if the response information is prompt information indicating that no service can be provided, determining that the spliced speech recognition information does not have complete semantics;
if the response information is not prompt information indicating that no service can be provided, determining that the spliced speech recognition information has complete semantics.
The above intent library may be a tree-structured intent library;
the step of performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result comprises:
extracting, according to preset rules, multiple feature text segments from the speech recognition information, wherein each feature text segment corresponds one-to-one to a level of the preset tree-structured intent library;
the step of matching the semantic parsing result against the intents stored in the preset intent library to obtain the user intent comprises:
determining the feature text segment corresponding to the first level as the feature text of the current level;
determining all intents at the first level of the tree-structured intent library as candidate intents;
matching the feature text of the current level against the candidate intents to obtain a current intent;
determining whether all feature text segments have been matched;
if so, determining the current intent as the user intent;
if not, determining the feature text segment corresponding to the next level as the feature text of the current level, and determining all intents at the next level under the current intent in the tree-structured intent library as the candidate intents;
returning to the step of matching the feature text of the current level against the candidate intents to obtain the current intent.
After the spliced speech recognition information is saved as speech recognition information to be spliced, the above method may further comprise:
if no voice information to be recognized has been obtained when a first preset duration is reached, performing semantic parsing on the saved speech recognition information to be spliced to obtain a semantic parsing result;
outputting to the user preset service prompt voice information corresponding to the semantic parsing result.
After the spliced speech recognition information is saved as speech recognition information to be spliced, the above method may further comprise:
if no voice information to be recognized has been obtained when the first preset duration is reached, outputting speech recognition failure prompt voice information to the user.
The above electronic device may be a smart device;
the above step of obtaining voice information to be recognized may comprise:
detecting voice information in real time;
after detecting that the user is inputting voice information, determining the voice information input by the user as the voice information to be recognized when the silence duration reaches a second preset duration.
The above electronic device may be a cloud server communicatively connected to a smart device;
the above step of obtaining voice information to be recognized comprises: receiving the voice information to be recognized sent by the smart device, wherein the voice information to be recognized sent by the smart device is: the voice information input by the user that the smart device, after detecting the user inputting voice information, determines as the voice information to be recognized when the silence duration reaches the second preset duration, and then sends to the cloud server.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the following steps:
obtaining voice information to be recognized;
recognizing the voice information to be recognized to obtain current speech recognition information corresponding to the voice information to be recognized;
determining whether there is saved speech recognition information to be spliced;
if so, splicing the speech recognition information to be spliced with the current speech recognition information to obtain spliced speech recognition information;
determining whether the spliced speech recognition information has complete semantics;
if so, determining the spliced speech recognition information as a speech recognition result;
if not, saving the spliced speech recognition information as speech recognition information to be spliced, and continuing to perform the step of obtaining voice information to be recognized.
It can be seen that, in the solution provided by this embodiment of the present invention, when the computer program is executed by the processor, complete semantics are checked on the current speech recognition information when there is no saved speech recognition information to be spliced, and the saved speech recognition information to be spliced and the current speech recognition information are spliced to obtain spliced speech recognition information whose semantics are likewise checked for completeness. If the semantics are not complete, voice information continues to be obtained and the speech recognition information continues to be spliced until complete semantics are obtained. This embodiment of the present application thus ensures the integrity of the recognized semantics and improves the speech recognition effect for incoherent speech.
The above method may further comprise:
if it is determined that there is no saved speech recognition information to be spliced, determining whether the current speech recognition information has complete semantics;
if so, determining the current speech recognition information as the speech recognition result;
if not, saving the current speech recognition information as speech recognition information to be spliced, and continuing to perform the step of obtaining voice information to be recognized.
The above step of determining whether the spliced speech recognition information has complete semantics may comprise:
performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result;
matching the semantic parsing result against the intents stored in a preset intent library to obtain a user intent;
obtaining, from the intent library, response information corresponding to the user intent;
determining whether the response information is prompt information indicating that no service can be provided;
if the response information is prompt information indicating that no service can be provided, determining that the spliced speech recognition information does not have complete semantics;
if the response information is not prompt information indicating that no service can be provided, determining that the spliced speech recognition information has complete semantics.
The above intent library may be a tree-structured intent library;
the step of performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result comprises:
extracting, according to preset rules, multiple feature text segments from the speech recognition information, wherein each feature text segment corresponds one-to-one to a level of the preset tree-structured intent library;
the step of matching the semantic parsing result against the intents stored in the preset intent library to obtain the user intent comprises:
determining the feature text segment corresponding to the first level as the feature text of the current level;
determining all intents at the first level of the tree-structured intent library as candidate intents;
matching the feature text of the current level against the candidate intents to obtain a current intent;
determining whether all feature text segments have been matched;
if so, determining the current intent as the user intent;
if not, determining the feature text segment corresponding to the next level as the feature text of the current level, and determining all intents at the next level under the current intent in the tree-structured intent library as the candidate intents;
returning to the step of matching the feature text of the current level against the candidate intents to obtain the current intent.
After the spliced speech recognition information is saved as speech recognition information to be spliced, the above method may further comprise:
if no voice information to be recognized has been obtained when a first preset duration is reached, performing semantic parsing on the saved speech recognition information to be spliced to obtain a semantic parsing result;
outputting to the user preset service prompt voice information corresponding to the semantic parsing result.
After the spliced speech recognition information is saved as speech recognition information to be spliced, the above method may further comprise:
if no voice information to be recognized has been obtained when the first preset duration is reached, outputting speech recognition failure prompt voice information to the user.
The above computer-readable storage medium may be a readable storage medium of a smart device;
the above step of obtaining voice information to be recognized may comprise:
detecting voice information in real time;
after detecting that the user is inputting voice information, determining the voice information input by the user as the voice information to be recognized when the silence duration reaches a second preset duration.
The above computer-readable storage medium may be a readable storage medium of a cloud server communicatively connected to a smart device;
the above step of obtaining voice information to be recognized comprises: receiving the voice information to be recognized sent by the smart device, wherein the voice information to be recognized sent by the smart device is: the voice information input by the user that the smart device, after detecting the user inputting voice information, determines as the voice information to be recognized when the silence duration reaches the second preset duration, and then sends to the cloud server.
An embodiment of the present invention further provides an application program which, when run, performs the speech recognition method provided by the embodiments of the present application. When executed by a processor, the application program implements the following steps:
obtaining voice information to be recognized;
recognizing the voice information to be recognized to obtain current speech recognition information corresponding to the voice information to be recognized;
determining whether there is saved speech recognition information to be spliced;
if so, splicing the speech recognition information to be spliced with the current speech recognition information to obtain spliced speech recognition information;
determining whether the spliced speech recognition information has complete semantics;
if so, determining the spliced speech recognition information as a speech recognition result;
if not, saving the spliced speech recognition information as speech recognition information to be spliced, and continuing to perform the step of obtaining voice information to be recognized.
It can be seen that, in the solution provided by this embodiment of the present invention, when the application program is executed by the processor, complete semantics are checked on the current speech recognition information when there is no saved speech recognition information to be spliced, and the saved speech recognition information to be spliced and the current speech recognition information are spliced to obtain spliced speech recognition information whose semantics are likewise checked for completeness. If the semantics are not complete, voice information continues to be obtained and the speech recognition information continues to be spliced until complete semantics are obtained. This embodiment of the present application thus ensures the integrity of the recognized semantics and improves the speech recognition effect for incoherent speech.
The above method may further comprise:
if it is determined that there is no saved speech recognition information to be spliced, determining whether the current speech recognition information has complete semantics;
if so, determining the current speech recognition information as the speech recognition result;
if not, saving the current speech recognition information as speech recognition information to be spliced, and continuing to perform the step of obtaining voice information to be recognized.
The above step of determining whether the spliced speech recognition information has complete semantics may comprise:
performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result;
matching the semantic parsing result against the intents stored in a preset intent library to obtain a user intent;
obtaining, from the intent library, response information corresponding to the user intent;
determining whether the response information is prompt information indicating that no service can be provided;
if the response information is prompt information indicating that no service can be provided, determining that the spliced speech recognition information does not have complete semantics;
if the response information is not prompt information indicating that no service can be provided, determining that the spliced speech recognition information has complete semantics.
The above intent library may be a tree-structured intent library;
the step of performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result comprises:
extracting, according to preset rules, multiple feature text segments from the speech recognition information, wherein each feature text segment corresponds one-to-one to a level of the preset tree-structured intent library;
the step of matching the semantic parsing result against the intents stored in the preset intent library to obtain the user intent comprises:
determining the feature text segment corresponding to the first level as the feature text of the current level;
determining all intents at the first level of the tree-structured intent library as candidate intents;
matching the feature text of the current level against the candidate intents to obtain a current intent;
determining whether all feature text segments have been matched;
if so, determining the current intent as the user intent;
if not, determining the feature text segment corresponding to the next level as the feature text of the current level, and determining all intents at the next level under the current intent in the tree-structured intent library as the candidate intents;
returning to the step of matching the feature text of the current level against the candidate intents to obtain the current intent.
After the spliced speech recognition information is saved as speech recognition information to be spliced, the above method may further comprise:
if no voice information to be recognized has been obtained when a first preset duration is reached, performing semantic parsing on the saved speech recognition information to be spliced to obtain a semantic parsing result;
outputting to the user preset service prompt voice information corresponding to the semantic parsing result.
After the spliced speech recognition information is saved as speech recognition information to be spliced, the above method may further comprise:
if no voice information to be recognized has been obtained when the first preset duration is reached, outputting speech recognition failure prompt voice information to the user.
The above application program may be stored in a smart device;
the above step of obtaining voice information to be recognized may comprise:
detecting voice information in real time;
after detecting that the user is inputting voice information, determining the voice information input by the user as the voice information to be recognized when the silence duration reaches a second preset duration.
The above application program may be stored in a cloud server communicatively connected to a smart device;
the above step of obtaining voice information to be recognized may comprise: receiving the voice information to be recognized sent by the smart device, wherein the voice information to be recognized sent by the smart device is: the voice information input by the user that the smart device, after detecting the user inputting voice information, determines as the voice information to be recognized when the silence duration reaches the second preset duration, and then sends to the cloud server.
As for the apparatus, electronic device, computer-readable storage medium and application program embodiments, since they are basically similar to the method embodiment, they are described relatively simply; for relevant parts, refer to the description of the method embodiment.
It should be noted that, in this document, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include" or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device comprising a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article or device comprising that element.
The embodiments in this specification are described in a related manner; for identical or similar parts between the embodiments, reference may be made to each other, and each embodiment focuses on its differences from the others. In particular, the system embodiment is described relatively simply because it is basically similar to the method embodiment; for relevant parts, refer to the description of the method embodiment.
The above are only preferred embodiments of the present application and are not intended to limit its scope of protection. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included within the scope of protection of the present application.

Claims (19)

  1. A speech recognition method, applied to an electronic device, the method comprising:
    obtaining voice information to be recognized;
    recognizing the voice information to be recognized to obtain current speech recognition information corresponding to the voice information to be recognized;
    determining whether there is saved speech recognition information to be spliced;
    if so, splicing the speech recognition information to be spliced with the current speech recognition information to obtain spliced speech recognition information;
    determining whether the spliced speech recognition information has complete semantics;
    if so, determining the spliced speech recognition information as a speech recognition result;
    if not, saving the spliced speech recognition information as speech recognition information to be spliced, and continuing to perform the step of obtaining voice information to be recognized.
  2. The method according to claim 1, wherein the method further comprises:
    if it is determined that there is no saved speech recognition information to be spliced, determining whether the current speech recognition information has complete semantics;
    if so, determining the current speech recognition information as the speech recognition result;
    if not, saving the current speech recognition information as speech recognition information to be spliced, and continuing to perform the step of obtaining voice information to be recognized.
  3. The method according to claim 1, wherein the step of determining whether the spliced speech recognition information has complete semantics comprises:
    performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result;
    matching the semantic parsing result against the intents stored in a preset intent library to obtain a user intent;
    obtaining, from the intent library, response information corresponding to the user intent;
    determining whether the response information is prompt information indicating that no service can be provided;
    if the response information is prompt information indicating that no service can be provided, determining that the spliced speech recognition information does not have complete semantics;
    if the response information is not prompt information indicating that no service can be provided, determining that the spliced speech recognition information has complete semantics.
  4. The method according to claim 3, wherein the intent library is a tree-structured intent library;
    the step of performing semantic parsing on the spliced speech recognition information to obtain a semantic parsing result comprises:
    extracting, according to preset rules, multiple feature text segments from the speech recognition information, wherein each feature text segment corresponds one-to-one to a level of the preset tree-structured intent library;
    the step of matching the semantic parsing result against the intents stored in the preset intent library to obtain the user intent comprises:
    determining the feature text segment corresponding to the first level as the feature text of the current level;
    determining all intents at the first level of the tree-structured intent library as candidate intents;
    matching the feature text of the current level against the candidate intents to obtain a current intent;
    determining whether all feature text segments have been matched;
    if so, determining the current intent as the user intent;
    if not, determining the feature text segment corresponding to the next level as the feature text of the current level, and determining all intents at the next level under the current intent in the tree-structured intent library as the candidate intents;
    returning to the step of matching the feature text of the current level against the candidate intents to obtain the current intent.
  5. The method according to claim 1, wherein after the spliced speech recognition information is saved as speech recognition information to be spliced, the method further comprises:
    if no voice information to be recognized has been obtained when a first preset duration is reached, performing semantic parsing on the saved speech recognition information to be spliced to obtain a semantic parsing result;
    outputting to the user preset service prompt voice information corresponding to the semantic parsing result.
  6. The method according to claim 1, wherein after the spliced speech recognition information is saved as speech recognition information to be spliced, the method further comprises:
    if no voice information to be recognized has been obtained when a first preset duration is reached, outputting speech recognition failure prompt voice information to the user.
  7. The method according to any one of claims 1 to 6, wherein the electronic device is a smart device;
    the step of obtaining voice information to be recognized comprises:
    detecting voice information in real time;
    after detecting that the user is inputting voice information, determining the voice information input by the user as the voice information to be recognized when the silence duration reaches a second preset duration.
  8. The method according to any one of claims 1 to 6, wherein the electronic device is a cloud server communicatively connected to a smart device;
    the step of obtaining voice information to be recognized comprises: receiving the voice information to be recognized sent by the smart device, wherein the voice information to be recognized sent by the smart device is: the voice information input by the user that the smart device, after detecting the user inputting voice information, determines as the voice information to be recognized when the silence duration reaches the second preset duration, and then sends to the cloud server.
  9. A speech recognition apparatus, applied to an electronic device, the apparatus comprising:
    an obtaining module, configured to obtain voice information to be recognized;
    a recognition module, configured to recognize the voice information to be recognized to obtain current speech recognition information corresponding to the voice information to be recognized;
    a first judging module, configured to determine whether there is saved speech recognition information to be spliced;
    a splicing module, configured to, when the judging result of the first judging module is that such information exists, splice the speech recognition information to be spliced with the current speech recognition information to obtain spliced speech recognition information;
    a first determining module, configured to determine whether the spliced speech recognition information has complete semantics;
    a second determining module, configured to, when the determination result of the first determining module is yes, determine the spliced speech recognition information as a speech recognition result;
    a third determining module, configured to, when the determination result of the first determining module is no, save the spliced speech recognition information as speech recognition information to be spliced and trigger the obtaining module.
  10. The apparatus according to claim 9, wherein the apparatus further comprises:
    a second judging module, configured to, when the first judging module determines that there is no saved speech recognition information to be spliced, determine whether the current speech recognition information has complete semantics;
    a fourth determining module, configured to, when the judging result of the second judging module is yes, determine the current speech recognition information as the speech recognition result;
    a fifth determining module, configured to, when the judging result of the second judging module is no, save the current speech recognition information as speech recognition information to be spliced and trigger the obtaining module.
  11. The apparatus according to claim 9, wherein the first determining module comprises:
    a parsing unit, configured to perform semantic parsing on the spliced speech recognition information to obtain a semantic parsing result;
    a matching unit, configured to match the semantic parsing result against the intents stored in a preset intent library to obtain a user intent;
    an obtaining unit, configured to obtain, from the intent library, response information corresponding to the user intent;
    a judging unit, configured to determine whether the response information is prompt information indicating that no service can be provided;
    a first determining unit, configured to, when the judging unit determines that the response information is prompt information indicating that no service can be provided, determine that the spliced speech recognition information does not have complete semantics;
    a second determining unit, configured to, when the judging unit determines that the response information is not prompt information indicating that no service can be provided, determine that the spliced speech recognition information has complete semantics.
  12. The apparatus according to claim 11, wherein the intent library is a tree-structured intent library;
    the parsing unit is specifically configured to extract, according to preset rules, multiple feature text segments from the speech recognition information, wherein each feature text segment corresponds one-to-one to a level of the preset tree-structured intent library;
    the matching unit comprises:
    a first determining subunit, configured to determine the feature text segment corresponding to the first level as the feature text of the current level;
    a second determining subunit, configured to determine all intents at the first level of the tree-structured intent library as candidate intents;
    a matching subunit, configured to match the feature text of the current level against the candidate intents to obtain a current intent;
    a judging subunit, configured to determine whether all feature text segments have been matched;
    a third determining subunit, configured to, when the judging result of the judging subunit is yes, determine the current intent as the user intent;
    a fourth determining subunit, configured to, when the judging result of the judging subunit is no, determine the feature text segment corresponding to the next level as the feature text of the current level, determine all intents at the next level under the current intent in the tree-structured intent library as the candidate intents, and trigger the matching subunit.
  13. The apparatus according to claim 9, wherein the apparatus further comprises:
    a parsing module, configured to, if no voice information to be recognized has been obtained when a first preset duration is reached, perform semantic parsing on the saved speech recognition information to be spliced to obtain a semantic parsing result;
    a first output module, configured to output to the user preset service prompt voice information corresponding to the semantic parsing result.
  14. The apparatus according to claim 9, wherein the apparatus further comprises:
    a second output module, configured to, if no voice information to be recognized has been obtained when a first preset duration is reached, output speech recognition failure prompt voice information to the user.
  15. The apparatus according to any one of claims 9 to 14, wherein the electronic device is a smart device;
    the obtaining module comprises:
    a detection unit, configured to detect voice information in real time;
    a third determining unit, configured to, after it is detected that the user is inputting voice information, determine the voice information input by the user as the voice information to be recognized when the silence duration reaches a second preset duration.
  16. The apparatus according to any one of claims 9 to 14, wherein the electronic device is a cloud server communicatively connected to a smart device;
    the obtaining module is specifically configured to receive the voice information to be recognized sent by the smart device, wherein the voice information to be recognized sent by the smart device is: the voice information input by the user that the smart device, after detecting the user inputting voice information, determines as the voice information to be recognized when the silence duration reaches the second preset duration, and then sends to the cloud server.
  17. An electronic device, comprising: a housing, a processor, a memory, a circuit board and a power supply circuit, wherein the circuit board is disposed inside the space enclosed by the housing, and the processor and the memory are disposed on the circuit board; the power supply circuit is configured to supply power to the circuits or components of the electronic device; the memory is configured to store executable program code; and the processor runs a program corresponding to the executable program code by reading the executable program code stored in the memory, so as to perform the speech recognition method according to any one of claims 1 to 8.
  18. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program which, when executed by a processor, implements the speech recognition method according to any one of claims 1 to 8.
  19. An application program, wherein the application program is configured to perform, when run, the speech recognition method according to any one of claims 1 to 8.
PCT/CN2018/082525 2017-04-10 2018-04-10 Speech recognition method and apparatus, and electronic device WO2018188591A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710229218.8 2017-04-10
CN201710229218.8A CN107146602B (zh) 2017-04-10 2017-04-10 Speech recognition method and apparatus, and electronic device

Publications (1)

Publication Number Publication Date
WO2018188591A1 true WO2018188591A1 (zh) 2018-10-18

Family

ID=59773625

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/082525 WO2018188591A1 (zh) 2017-04-10 2018-04-10 Speech recognition method and apparatus, and electronic device

Country Status (2)

Country Link
CN (1) CN107146602B (zh)
WO (1) WO2018188591A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113393845A (zh) * 2021-06-11 2021-09-14 上海明略人工智能(集团)有限公司 Method and apparatus for speaker recognition, electronic device and readable storage medium

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107146602B (zh) * 2017-04-10 2020-10-02 北京猎户星空科技有限公司 Speech recognition method and apparatus, and electronic device
CN107886944B (zh) * 2017-11-16 2021-12-31 出门问问创新科技有限公司 Speech recognition method, apparatus, device and storage medium
CN108847236A (zh) * 2018-07-26 2018-11-20 珠海格力电器股份有限公司 Method and apparatus for receiving voice information, and method and apparatus for parsing voice information
CN108847237A (zh) * 2018-07-27 2018-11-20 重庆柚瓣家科技有限公司 Continuous speech recognition method and system
CN108962262B (zh) * 2018-08-14 2021-10-08 思必驰科技股份有限公司 Voice data processing method and apparatus
CN109473104B (zh) * 2018-11-07 2021-11-30 思必驰科技股份有限公司 Method and apparatus for optimizing speech recognition network latency
CN111627463B (zh) * 2019-02-28 2024-01-16 百度在线网络技术(北京)有限公司 Method and apparatus for determining a voice VAD tail point, electronic device and computer-readable medium
CN111785259A (zh) * 2019-04-04 2020-10-16 北京猎户星空科技有限公司 Information processing method and apparatus, and electronic device
CN110162176B (zh) * 2019-05-20 2022-04-26 北京百度网讯科技有限公司 Method and apparatus for mining voice instructions, terminal and computer-readable medium
CN110287303B (zh) * 2019-06-28 2021-08-20 北京猎户星空科技有限公司 Human-machine dialogue processing method and apparatus, electronic device and storage medium
CN110517673B (zh) * 2019-07-18 2023-08-18 平安科技(深圳)有限公司 Speech recognition method and apparatus, computer device and storage medium
CN112242139B (zh) * 2019-07-19 2024-01-23 北京如布科技有限公司 Voice interaction method, apparatus, device and medium
CN110619873A (zh) * 2019-08-16 2019-12-27 北京小米移动软件有限公司 Audio processing method, apparatus and storage medium
CN112581938B (zh) * 2019-09-30 2024-04-09 华为技术有限公司 Artificial-intelligence-based speech breakpoint detection method, apparatus and device
CN110767240B (zh) * 2019-10-31 2021-12-03 广东美的制冷设备有限公司 Device control method based on recognition of children's accents, device, storage medium and apparatus
CN110808031A (zh) * 2019-11-22 2020-02-18 大众问问(北京)信息科技有限公司 Speech recognition method and apparatus, and computer device
CN112908316A (zh) * 2019-12-02 2021-06-04 浙江思考者科技有限公司 AI intelligent voice stream collection
CN113362828B (zh) * 2020-03-04 2022-07-05 阿波罗智联(北京)科技有限公司 Method and apparatus for recognizing speech
CN111402866B (zh) * 2020-03-23 2024-04-05 北京声智科技有限公司 Semantic recognition method and apparatus, and electronic device
CN111916082A (zh) * 2020-08-14 2020-11-10 腾讯科技(深圳)有限公司 Voice interaction method and apparatus, computer device and storage medium
CN112700769A (zh) * 2020-12-26 2021-04-23 科大讯飞股份有限公司 Semantic understanding method, apparatus, device and computer-readable storage medium
CN114078478B (zh) * 2021-11-12 2022-09-23 北京百度网讯科技有限公司 Voice interaction method and apparatus, electronic device and storage medium
CN114582333A (zh) * 2022-02-21 2022-06-03 中国第一汽车股份有限公司 Speech recognition method and apparatus, electronic device and storage medium
CN114648984B (zh) * 2022-05-23 2022-08-19 深圳华策辉弘科技有限公司 Audio sentence segmentation method and apparatus, computer device and storage medium
CN115512687B (zh) * 2022-11-08 2023-02-17 之江实验室 Speech sentence segmentation method and apparatus, storage medium and electronic device
CN117524199B (zh) * 2024-01-04 2024-04-16 广州小鹏汽车科技有限公司 Speech recognition method and apparatus, and vehicle

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7177810B2 (en) * 2001-04-10 2007-02-13 Sri International Method and apparatus for performing prosody-based endpointing of a speech signal
CN105100460A (zh) * 2015-07-09 2015-11-25 上海斐讯数据通信技术有限公司 Method and system for controlling a smart terminal by voice
US20160351196A1 (en) * 2015-05-26 2016-12-01 Nuance Communications, Inc. Methods and apparatus for reducing latency in speech recognition applications
US20170069309A1 (en) * 2015-09-03 2017-03-09 Google Inc. Enhanced speech endpointing
US20170069308A1 (en) * 2015-09-03 2017-03-09 Google Inc. Enhanced speech endpointing
CN107146618A (zh) * 2017-06-16 2017-09-08 北京云知声信息技术有限公司 Voice processing method and apparatus
CN107146602A (zh) * 2017-04-10 2017-09-08 北京猎户星空科技有限公司 Speech recognition method and apparatus, and electronic device
CN107195303A (zh) * 2017-06-16 2017-09-22 北京云知声信息技术有限公司 Voice processing method and apparatus

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6453292B2 (en) * 1998-10-28 2002-09-17 International Business Machines Corporation Command boundary identifier for conversational natural language
JP2002041082A (ja) * 2000-07-28 2002-02-08 Hitachi Ltd Speech recognition device
JP4906379B2 (ja) * 2006-03-22 2012-03-28 富士通株式会社 Speech recognition device, speech recognition method, and computer program
CN103035243B (zh) * 2012-12-18 2014-12-24 中国科学院自动化研究所 Method and system for continuous recognition of long speech with real-time feedback of recognition results
CN104267922B (zh) * 2014-09-16 2019-05-31 联想(北京)有限公司 Information processing method and electronic device


Also Published As

Publication number Publication date
CN107146602B (zh) 2020-10-02
CN107146602A (zh) 2017-09-08

Similar Documents

Publication Publication Date Title
WO2018188591A1 (zh) Speech recognition method and apparatus, and electronic device
WO2018188586A1 (zh) User registration method and apparatus, and electronic device
US20230206940A1 Method of and system for real time feedback in an incremental speech input interface
US10332524B2 Speech recognition wake-up of a handheld portable electronic device
US9117449B2 Embedded system for construction of small footprint speech recognition with user-definable constraints
WO2017071182A1 (zh) Voice wake-up method, apparatus and system
US9142219B2 Background speech recognition assistant using speaker verification
KR102437944B1 (ko) Voice wake-up method and apparatus
JP2020086437A (ja) Speech recognition method and speech recognition device
JP6926241B2 (ja) Hotword recognition speech synthesis
US9837069B2 Technologies for end-of-sentence detection using syntactic coherence
US20150106089A1 Name Based Initiation of Speech Recognition
US20130080171A1 Background speech recognition assistant
GB2559643A Facilitating creation and playback of user-recorded audio
CN107146605B (zh) Speech recognition method and apparatus, and electronic device
JP7063937B2 (ja) Method, apparatus, electronic device, computer-readable storage medium and computer program for voice interaction
TWI660341B Search method and electronic device applying the method
JP7330066B2 (ja) Speech recognition device, speech recognition method, and program therefor
US20180350360A1 Provide non-obtrusive output

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18783964

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18783964

Country of ref document: EP

Kind code of ref document: A1