CN113362824A - Voice recognition method and device and terminal equipment - Google Patents

Voice recognition method and device and terminal equipment

Info

Publication number
CN113362824A
CN113362824A (application CN202110642269.XA)
Authority
CN
China
Prior art keywords
output voice
output
voice
speech
integrity
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110642269.XA
Other languages
Chinese (zh)
Other versions
CN113362824B (en)
Inventor
皮碧虹
杨德文
龙丁奋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Tongxingzhe Technology Co ltd
Original Assignee
Shenzhen Tongxingzhe Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Tongxingzhe Technology Co ltd filed Critical Shenzhen Tongxingzhe Technology Co ltd
Priority to CN202110642269.XA priority Critical patent/CN113362824B/en
Publication of CN113362824A publication Critical patent/CN113362824A/en
Application granted granted Critical
Publication of CN113362824B publication Critical patent/CN113362824B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/22: Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L15/04: Segmentation; Word boundary detection
    • G10L15/05: Word boundary detection
    • G10L15/26: Speech to text systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Machine Translation (AREA)

Abstract

The invention belongs to the technical field of speech recognition and provides a speech recognition method, a speech recognition apparatus and a terminal device. The method comprises: collecting a first output voice of a target at the current moment, and recognizing the first output voice to obtain a recognition result; judging the integrity of the first output voice according to the recognition result; scoring the complete confidence of the judgment result; and responding to the first output voice when the complete confidence meets a preset condition. The invention achieves a better sentence-break effect, accurately analyzes whether the user has completely expressed their intention, and has a wide application range.

Description

Voice recognition method and device and terminal equipment
Technical Field
The present invention relates to the field of speech recognition technologies, and in particular, to a speech recognition method, an apparatus, and a terminal device.
Background
In the interaction design of an AI (Artificial Intelligence) device, speech recognition is an important component: when a user interacts with the AI device, the device first performs speech recognition on the user's output voice. Key technologies in speech recognition include semantic recognition and sentence/word segmentation. For sentence-break recognition, the interaction is usually designed so that the user actively signals that the sentence is finished, for example by pressing or releasing a key; alternatively, silence detection is used, and a sentence break is considered triggered when no voice from the user is detected for a period of time.
However, the first approach is unsuitable for specific environments such as in-vehicle speech recognition, while in the second approach the silence-detection duration is difficult to set and may cause recognition to end prematurely.
Disclosure of Invention
The main object of the present invention is to provide a speech recognition method, a speech recognition apparatus and a terminal device, so as to solve the problems that, in the speech recognition design of existing AI devices, the sentence-break mechanisms have a narrow application range and a poor sentence-break effect.
In order to achieve the above object, a first aspect of embodiments of the present invention provides a speech recognition method, including:
acquiring first output voice of a target at the current moment, and identifying the first output voice to obtain an identification result;
judging the integrity of the first output voice according to the recognition result;
scoring the complete confidence of the judgment result;
and responding to the first output voice when the complete confidence meets a preset condition.
With reference to the first aspect of the embodiments of the present invention, in a first implementation manner of the present invention, recognizing the first output voice to obtain a recognition result includes:
converting the first output voice into text information;
and performing word segmentation processing on the text information, and taking a word segmentation processing result as the recognition result.
With reference to the first aspect and the first implementation manner of the embodiments of the present invention, in a second implementation manner of the present invention, before determining the integrity of the first output speech according to the recognition result, the method includes:
acquiring complete user corpora;
splitting the N complete user corpora, wherein k groups of split corpora are obtained based on the n-th complete user corpus;
performing corpus integrity classification on each group of split corpora, wherein the corpus integrity value of a group is in direct proportion to the number of split corpora it contains;
and constructing a complete user corpus based on the complete user corpora subjected to corpus integrity classification.
With reference to the second implementation manner of the first aspect of the embodiments of the present invention, in a third implementation manner of the present invention, the determining the integrity of the first output voice according to the recognition result includes:
matching the recognition result in the complete user corpus to obtain corpus integrity based on the recognition result;
and when the corpus integrity of the recognition result is greater than a preset value, the first output voice is complete.
With reference to the first aspect of the embodiments of the present invention, in a fourth implementation manner of the present invention, when the complete confidence does not meet a preset condition, a second output voice of the target at the next moment is acquired, and the second output voice is recognized to obtain a supplementary recognition result;
splicing the recognition result and the supplementary recognition result, and judging the integrity of the first output voice and the second output voice according to the splicing result;
and responding according to a splicing result when the first output voice and the second output voice are complete.
With reference to the fourth implementation manner of the first aspect of the embodiments of the present invention, in a fifth implementation manner of the present invention, when the first output speech and the second output speech are incomplete, the recognition result based on the first output speech is deleted.
With reference to the fourth implementation manner of the first aspect of the present invention, in a sixth implementation manner of the present invention, if a second output voice of the target at the next moment cannot be acquired, the first output voice and the recognition result based on the first output voice are deleted.
A second aspect of an embodiment of the present invention provides a speech recognition apparatus, including:
the first output voice acquisition module is used for acquiring first output voice of a target at the current moment and identifying the first output voice to obtain an identification result;
the integrity judging module is used for judging the integrity of the first output voice according to the recognition result;
the confidence degree scoring module is used for scoring the complete confidence degree of the judgment result;
and the voice response module is used for responding to the first output voice when the complete confidence meets a preset condition.
A third aspect of embodiments of the present invention provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor implements the steps of the method provided in the first aspect when executing the computer program.
A fourth aspect of embodiments of the present invention provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the method as provided in the first aspect above.
The embodiment of the invention provides a speech recognition method which takes the user as the target, collects the first output voice of the target at the current moment, and performs integrity judgment and complete-confidence analysis on the first output voice, so as to analyze whether the user has completely expressed their intention. Compared with a sentence-break mode based on silence detection, this avoids the difficulty of setting the silence-detection duration, which can end speech recognition prematurely so that the user's complete intention cannot be recognized.
Drawings
Fig. 1 is a schematic flow chart illustrating an implementation of a speech recognition method according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a structure of a speech recognition apparatus according to an embodiment of the present invention.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
Suffixes such as "module", "part", or "unit" used to denote elements are employed herein only for convenience of description of the present invention and have no specific meaning in themselves. Thus, "module" and "component" may be used interchangeably.
As shown in fig. 1, an embodiment of the present invention provides a speech recognition method applied to an AI device, which includes, but is not limited to, the following steps:
s101, collecting first output voice of a target at the current moment, and recognizing the first output voice to obtain a recognition result.
In the above step S101, the target is the user currently using the AI device. Conceivably, before the first output voice of the target at the current moment is collected, whether the current target is a user with usage rights should be verified.
In a specific application, such as in an environment of vehicle-mounted voice recognition, a user having a usage right may be defined as a driver, and in an environment of home voice recognition, a user having a usage right may be a user using a preset instruction sentence.
In the above step S101, recognizing the first output voice means natural speech recognition, that is, converting the first output voice into text information recognizable by a machine.
In one embodiment, one implementation manner of the step S101 may be:
converting the first output voice into text information;
and performing word segmentation processing on the text information, and taking a word segmentation processing result as the recognition result.
In the embodiment of the present invention, word segmentation on the one hand simulates a user's understanding of a sentence to achieve the effect of semantic analysis, and on the other hand, as shown in step S102 below, its result is used to judge the integrity of the first output speech.
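As an illustration only, the two sub-steps of this implementation can be sketched as follows. The whitespace tokenizer is an assumption made here for readability (a production system would use a real word segmenter, e.g. one suited to Chinese text), and `text_from_asr` is assumed to come from an external speech-to-text engine; neither is prescribed by the patent.

```python
def recognize(text_from_asr: str) -> list[str]:
    """Word-segment the transcribed speech; the token list is the
    recognition result consumed by the integrity judgment (S102)."""
    # Whitespace splitting stands in for a real word segmenter here;
    # this is an illustrative assumption, not the patent's implementation.
    return text_from_asr.lower().split()

print(recognize("Navigate to location A"))  # ['navigate', 'to', 'location', 'a']
```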
And S102, judging the integrity of the first output voice according to the recognition result.
In step S102, whether the target has finished its sentence is obtained by judging the integrity of the first output voice. Specifically, when the first output voice is judged complete, the target's sentence break is finished, that is, the user has given a complete voice instruction; when the first output voice is judged incomplete, the target's sentence break is not finished, that is, the user has not yet given a complete voice instruction.
In the embodiment of the present invention, the recognition result is word-segmented text information. When the integrity of the first output speech is judged according to the recognition result, the integrity of the recognition result is evaluated using a complete user corpus; therefore, before step S102 is carried out, the complete user corpus must be constructed, in the following manner:
before judging the integrity of the first output voice according to the recognition result, the method comprises the following steps:
acquiring complete user corpora;
splitting the N complete user corpora, wherein k groups of split corpora are obtained based on the n-th complete user corpus;
performing corpus integrity classification on each group of split corpora, wherein the corpus integrity value of a group is in direct proportion to the number of split corpora it contains;
and constructing a complete user corpus based on the complete user corpora subjected to corpus integrity classification.
In this embodiment, the complete user corpus is described with specific data. Suppose there are three complete user corpora: "navigate to location A", "navigate to location B" and "navigate to location C". Splitting these corpora yields 3 groups of split corpora based on the 1st complete user corpus, denoted {navigate to, location A}, {navigate to}, {location A}; 3 groups based on the 2nd, denoted {navigate to, location B}, {navigate to}, {location B}; and 3 groups based on the 3rd, denoted {navigate to, location C}, {navigate to}, {location C}. The corpus integrity value of {navigate to, location A}, {navigate to, location B} and {navigate to, location C} may be taken as 2, and that of {navigate to}, {location A}, {location B} and {location C} as 1.
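Under the assumptions of this worked example, the corpus construction can be sketched as follows. The function name, the sorted-tuple keys, and the use of phrase subsets as split groups are illustrative choices made here, not the patent's implementation.

```python
from itertools import combinations

def build_corpus(complete_corpora: list[list[str]]) -> dict[tuple, int]:
    """Split each complete user corpus into phrase groups and record an
    integrity value proportional to the number of phrases in the group."""
    table: dict[tuple, int] = {}
    for phrases in complete_corpora:
        # every non-empty subset of the phrase list forms one split group
        for size in range(1, len(phrases) + 1):
            for group in combinations(phrases, size):
                key = tuple(sorted(group))
                table[key] = max(table.get(key, 0), len(group))
    return table

corpus = build_corpus([
    ["navigate to", "location A"],
    ["navigate to", "location B"],
    ["navigate to", "location C"],
])
print(corpus[("navigate to",)])               # 1
print(corpus[("location A", "navigate to")])  # 2
```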
Based on the complete user corpus, the implementation manner of the step S102 includes:
matching the recognition result in the complete user corpus to obtain corpus integrity based on the recognition result;
and when the corpus integrity of the recognition result is greater than a preset value, the first output voice is complete.
Through this matching, the corpus integrity of the recognition result is obtained. Combined with the specific data above, the preset value is set to 1, because {navigate to}, {location A}, {location B} and {location C} in the split corpora obtained from the complete user corpora are incomplete corpora whose corpus integrity value is 1. When the first output speech is "navigate to", then according to step S101 and its detailed implementation the recognition result is {navigate to}; matching it in the complete user corpus, the matching results against the 3 complete user corpora all give a corpus integrity value of 1, which is not greater than the preset value, so the first output speech is judged incomplete. When the first output speech is "navigate to location A", the recognition result is {navigate to, location A}; matching it in the complete user corpus, the matching results against the 3 complete user corpora give corpus integrity values of 2, 1 and 1 respectively, whose mean is greater than 1, so the first output speech is judged complete.
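A minimal sketch of this judgment, reusing the example data above. Note that the description averages matching values over all complete corpora, while this sketch simplifies to a direct lookup of the split group, so it should be read as an approximation under stated assumptions.

```python
def judge_integrity(recognition_result: list[str],
                    corpus: dict[tuple, int],
                    preset_value: int = 1) -> bool:
    """Match the recognition result in the complete user corpus and
    compare its corpus integrity against the preset value."""
    key = tuple(sorted(recognition_result))
    integrity = corpus.get(key, 0)   # unmatched results count as integrity 0
    return integrity > preset_value

# example data from the description (integrity values as assumed above)
corpus = {
    ("navigate to",): 1,
    ("location A",): 1,
    ("location A", "navigate to"): 2,
}
print(judge_integrity(["navigate to"], corpus))                # False: incomplete
print(judge_integrity(["navigate to", "location A"], corpus))  # True: complete
```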
And S103, scoring the complete confidence degree of the judgment result.
And S104, responding to the first output voice when the complete confidence meets a preset condition.
Through the above steps S101 to S104, the integrity of the first output voice is analyzed and the complete confidence is scored, so as to determine whether the user has completely expressed their intention; if so, the AI device responds. Compared with the mode in which the user actively requests the end of a sentence break, this speech recognition method requires no manual interaction from the user, is not limited by the application scenario, and thus has a wide application range; compared with sentence breaking via silence detection, it avoids the difficulty of setting the silence-detection duration, which can end speech recognition prematurely so that the user's complete intention cannot be recognized.
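The four steps can be connected into one pipeline, sketched below with all components passed in as assumed callables. The names `asr`, `segment`, `judge` and `score` are hypothetical, and the patent does not prescribe a concrete confidence-scoring function, so the stubs in the demo are placeholders only.

```python
def respond_to_speech(speech_audio, asr, segment, judge, score,
                      threshold: float = 0.5):
    """End-to-end sketch of S101-S104: recognize, judge integrity,
    score the complete confidence, then respond or keep waiting."""
    text = asr(speech_audio)                 # S101: speech to text
    result = segment(text)                   # S101: word segmentation
    is_complete = judge(result)              # S102: integrity judgment
    confidence = score(result, is_complete)  # S103: confidence scoring
    if is_complete and confidence >= threshold:
        return result                        # S104: respond
    return None                              # wait for a second output voice

# demo with stub callables (all hypothetical)
demo = respond_to_speech(
    b"audio",
    asr=lambda a: "navigate to location A",
    segment=lambda t: t.split(),
    judge=lambda r: len(r) >= 4,
    score=lambda r, complete: 1.0 if complete else 0.0,
)
print(demo)  # ['navigate', 'to', 'location', 'A']
```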
In a specific application, if the complete confidence does not satisfy the preset condition, that is, the judgment result indicates that the first output voice is incomplete, the system can continue to wait for the user's complete expression, for example by continuing to collect the target's output voice and performing recognition and integrity analysis on the combined output voice.
Therefore, in one embodiment, when the complete confidence does not satisfy the preset condition, a second output voice of the target at the next moment is acquired, and the second output voice is recognized to obtain a supplementary recognition result;
splicing the recognition result and the supplementary recognition result, and judging the integrity of the first output voice and the second output voice according to the splicing result;
and responding according to a splicing result when the first output voice and the second output voice are complete.
The interval between the first output voice and the second output voice does not affect whether they are treated together as a complete output voice, which avoids the problem of recognition ending prematurely when sentence breaking is implemented through silence detection.
In one embodiment, the recognition result based on the first output voice is deleted when the first output voice and the second output voice are incomplete.
In the embodiment of the present invention, the first output voice and the second output voice being incomplete indicates that they cannot be regarded together as a complete output voice. In this case, the recognition result based on the first output voice is deleted, the second output voice is taken as a new first output voice, the process returns to step S101, and speech recognition is performed again through steps S101 to S104.
in one embodiment, if a second output voice of the target at the next time cannot be acquired, the first output voice and a recognition result based on the first output voice are acquired.
In a specific application, the situation in which the second output voice of the target at the next moment cannot be acquired may be that no output voice of the target is collected within a preset time; any moment before the current moment plus the preset time may serve as the next moment.
The preset time may be at least one hour, or another time interval different from the silence-detection duration, which is not limited in the embodiment of the present invention.
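The handling of the second output voice described above (splice and re-judge; discard the first result when both are incomplete or when no second speech arrives) can be sketched as follows; all helper names are hypothetical and the stubs in the demo are placeholders.

```python
def splice_and_judge(first_result, get_next_speech, recognize, judge):
    """If the first recognition result is incomplete, acquire the second
    output voice, splice the two recognition results and re-judge; when
    no second speech arrives, the first result is discarded."""
    if judge(first_result):
        return first_result                        # already complete
    second_speech = get_next_speech()              # may time out
    if second_speech is None:
        return None                                # discard first result
    spliced = first_result + recognize(second_speech)
    if judge(spliced):
        return spliced                             # respond to the splice
    # both incomplete: drop the first result and treat the second
    # speech as a new first output voice
    return splice_and_judge(recognize(second_speech),
                            get_next_speech, recognize, judge)

# demo: "navigate to" alone is incomplete; spliced with "location A" it is complete
speeches = iter(["location A"])
out = splice_and_judge(
    ["navigate", "to"],
    get_next_speech=lambda: next(speeches, None),
    recognize=str.split,
    judge=lambda r: len(r) >= 4,
)
print(out)  # ['navigate', 'to', 'location', 'A']
```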
As shown in fig. 2, an embodiment of the present invention provides a speech recognition apparatus 20, including:
the first output voice acquisition module 21 is configured to acquire a first output voice of a target at a current moment, and recognize the first output voice to obtain a recognition result;
an integrity judgment module 22, configured to judge integrity of the first output voice according to the recognition result;
a confidence score module 23, configured to perform complete confidence score on the determination result;
and the voice response module 24 is configured to respond to the first output voice when the complete confidence coefficient meets a preset condition.
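The four modules can be sketched as one apparatus class. The constructor arguments are assumed callables supplied by the host AI device; they are illustrative stand-ins, not interfaces defined by the patent.

```python
class SpeechRecognitionApparatus:
    """Sketch of the apparatus: acquisition, integrity judgment,
    confidence scoring and response modules wired together."""

    def __init__(self, acquire, judge, score, respond):
        self.acquire = acquire    # first-output-voice acquisition module
        self.judge = judge        # integrity judgment module
        self.score = score        # confidence scoring module
        self.respond = respond    # voice response module

    def run(self, preset: float = 0.5):
        result = self.acquire()                       # collect and recognize
        is_complete = self.judge(result)              # judge integrity
        confidence = self.score(result, is_complete)  # score confidence
        if confidence >= preset:                      # preset condition met
            return self.respond(result)
        return None

# demo with stub modules (hypothetical)
apparatus = SpeechRecognitionApparatus(
    acquire=lambda: ["navigate", "to", "location", "A"],
    judge=lambda r: len(r) >= 4,
    score=lambda r, complete: 1.0 if complete else 0.0,
    respond=lambda r: " ".join(r),
)
print(apparatus.run())  # navigate to location A
```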
The embodiment of the present invention further provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and capable of running on the processor, and when the processor executes the computer program, the steps in the voice recognition method as described in the first embodiment are implemented.
An embodiment of the present invention further provides a storage medium, which is a computer-readable storage medium, and a computer program is stored on the storage medium, and when the computer program is executed by a processor, the steps in the speech recognition method according to the first embodiment are implemented.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the foregoing embodiments illustrate the present invention in detail, those of ordinary skill in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A speech recognition method, comprising:
acquiring first output voice of a target at the current moment, and identifying the first output voice to obtain an identification result;
judging the integrity of the first output voice according to the recognition result;
scoring the complete confidence of the judgment result;
and responding to the first output voice when the complete confidence meets a preset condition.
2. The speech recognition method of claim 1, wherein recognizing the first output speech to obtain a recognition result comprises:
converting the first output voice into text information;
and performing word segmentation processing on the text information, and taking a word segmentation processing result as the recognition result.
3. The speech recognition method of claim 2, wherein before determining the integrity of the first output speech based on the recognition result, comprising:
acquiring complete user corpora;
splitting the N complete user corpora, wherein k groups of split corpora are obtained based on the n-th complete user corpus;
performing corpus integrity classification on each group of split corpora, wherein the corpus integrity value of a group is in direct proportion to the number of split corpora it contains;
and constructing a complete user corpus based on the complete user corpora subjected to corpus integrity classification.
4. The speech recognition method of claim 3, wherein determining the integrity of the first output speech based on the recognition result comprises:
matching the recognition result in the complete user corpus to obtain corpus integrity based on the recognition result;
and when the corpus integrity of the recognition result is greater than a preset value, the first output voice is complete.
5. The speech recognition method according to claim 1, wherein when the complete confidence does not satisfy a preset condition, a second output voice of the target at the next moment is acquired, and the second output voice is recognized to obtain a supplementary recognition result;
splicing the recognition result and the supplementary recognition result, and judging the integrity of the first output voice and the second output voice according to the splicing result;
and responding according to a splicing result when the first output voice and the second output voice are complete.
6. The speech recognition method of claim 5, wherein a recognition result based on the first output speech is deleted when the first output speech and the second output speech are incomplete.
7. The speech recognition method according to claim 5, wherein if a second output speech of the target at the next time cannot be acquired, the first output speech and the recognition result based on the first output speech are deleted.
8. A speech recognition apparatus, comprising:
the first output voice acquisition module is used for acquiring first output voice of a target at the current moment and identifying the first output voice to obtain an identification result;
the integrity judging module is used for judging the integrity of the first output voice according to the recognition result;
the confidence degree scoring module is used for scoring the complete confidence degree of the judgment result;
and the voice response module is used for responding to the first output voice when the complete confidence meets a preset condition.
9. A terminal device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the steps of the speech recognition method according to any one of claims 1 to 7 when executing the computer program.
10. A storage medium being a computer-readable storage medium having a computer program stored thereon, wherein the computer program, when being executed by a processor, carries out the steps of the speech recognition method according to any one of the claims 1 to 7.
CN202110642269.XA 2021-06-09 2021-06-09 Voice recognition method and device and terminal equipment Active CN113362824B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110642269.XA CN113362824B (en) 2021-06-09 2021-06-09 Voice recognition method and device and terminal equipment


Publications (2)

Publication Number Publication Date
CN113362824A true CN113362824A (en) 2021-09-07
CN113362824B CN113362824B (en) 2024-03-12

Family

ID=77533385

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110642269.XA Active CN113362824B (en) 2021-06-09 2021-06-09 Voice recognition method and device and terminal equipment

Country Status (1)

Country Link
CN (1) CN113362824B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170110116A1 (en) * 2015-10-19 2017-04-20 Google Inc. Speech endpointing
CN107919130A (en) * 2017-11-06 2018-04-17 百度在线网络技术(北京)有限公司 Method of speech processing and device based on high in the clouds
CN109344231A (en) * 2018-10-31 2019-02-15 广东小天才科技有限公司 A kind of method and system of the semantic incomplete corpus of completion
WO2020113918A1 (en) * 2018-12-06 2020-06-11 平安科技(深圳)有限公司 Statement rationality determination method and apparatus based on semantic parsing, and computer device
CN111583933A (en) * 2020-04-30 2020-08-25 北京猎户星空科技有限公司 Voice information processing method, device, equipment and medium
CN112530417A (en) * 2019-08-29 2021-03-19 北京猎户星空科技有限公司 Voice signal processing method and device, electronic equipment and storage medium
CN112581938A (en) * 2019-09-30 2021-03-30 华为技术有限公司 Voice breakpoint detection method, device and equipment based on artificial intelligence


Also Published As

Publication number Publication date
CN113362824B (en) 2024-03-12

Similar Documents

Publication Publication Date Title
CN110196901B (en) Method and device for constructing dialog system, computer equipment and storage medium
KR102447513B1 (en) Self-learning based dialogue apparatus for incremental dialogue knowledge, and method thereof
CN108304372B (en) Entity extraction method and device, computer equipment and storage medium
CN107195303B (en) Voice processing method and device
KR102316063B1 (en) Method and apparatus for identifying key phrase in audio data, device and medium
CN101326572B (en) Speech recognition system with huge vocabulary
US8751226B2 (en) Learning a verification model for speech recognition based on extracted recognition and language feature information
CN110347787B (en) Interview method and device based on AI auxiliary interview scene and terminal equipment
US20170199867A1 (en) Dialogue control system and dialogue control method
CN110021293B (en) Voice recognition method and device and readable storage medium
CN111292751B (en) Semantic analysis method and device, voice interaction method and device, and electronic equipment
CN108027814B (en) Stop word recognition method and device
CN112256845A (en) Intention recognition method, device, electronic equipment and computer readable storage medium
CN108710653B (en) On-demand method, device and system for reading book
Gandhe et al. Using web text to improve keyword spotting in speech
CN111881297A (en) Method and device for correcting voice recognition text
CN110020163B (en) Search method and device based on man-machine interaction, computer equipment and storage medium
CN110473543B (en) Voice recognition method and device
CN114550718A (en) Hot word speech recognition method, device, equipment and computer readable storage medium
CN110175242B (en) Human-computer interaction association method, device and medium based on knowledge graph
CN114492396A (en) Text error correction method for automobile proper nouns and readable storage medium
CN114595692A (en) Emotion recognition method, system and terminal equipment
Hakkani-Tür et al. A discriminative classification-based approach to information state updates for a multi-domain dialog system
CN113362824B (en) Voice recognition method and device and terminal equipment
CN111639160A (en) Domain identification method, interaction method, electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant