CN110516043B - Answer generation method and device for question-answering system - Google Patents
Answer generation method and device for question-answering system
- Publication number
- CN110516043B (application CN201910814386.2A)
- Authority
- CN
- China
- Prior art keywords
- conversation
- answer
- question
- user
- word
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/3331—Query processing
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Databases & Information Systems (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Artificial Intelligence (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
- Machine Translation (AREA)
Abstract
The invention discloses an answer generation method and device for a question-answering system. The answer generation method for a question-answering system comprises the following steps: when a conversation is carried out with a user, obtaining the conversation scene and the user's conversation intention; when a question from the user is received, identifying the feature information contained in the question; generating a corresponding answer based on the user's question; generating at least one tone word according to the conversation intention, the conversation scene and the feature information; and dynamically inserting the at least one tone word at the beginning and/or end of the corresponding answer to form a processed answer. In the scheme provided by the present application, the corresponding tone words are determined by taking into account the conversation intention, the conversation scene and the feature information, and are then dynamically inserted into the answer, so that the answer content comes closer to the effect of communicating with a real person and the user experience is improved.
Description
Technical Field
The invention belongs to the technical field of intelligent voice conversation, and particularly relates to an answer generation method and device for a question-answering system.
Background
In the related art, the current general dialog design methods are as follows:
A. Conversation content is designed as question-answer pairs: the user fills in a question and then compiles an answer, and once the question is hit, the machine replies with the corresponding answer;
B. Conversation content is designed based on a knowledge graph: the user fills in the corresponding knowledge content, and the answers are generated automatically according to the different contents.
The inventor found that the above methods either match specified questions to corresponding answers or dynamically adjust the answers, but they do not dynamically incorporate the conversation intention and the node information of the conversation scene. As a result, the answers do not blend into the conversation scene well, which makes the conversation feel unnatural and uncoordinated.
Disclosure of Invention
The embodiments of the invention provide an answer generation method and device for a question-answering system, which are intended to solve at least one of the above technical problems.
In a first aspect, an embodiment of the present invention provides an answer generation method for a question-answering system, including: when a conversation is carried out with a user, obtaining the conversation scene and the user's conversation intention; when a question from the user is received, identifying the feature information contained in the question; generating a corresponding answer based on the user's question; generating at least one tone word according to the conversation intention, the conversation scene and the feature information; and dynamically inserting the at least one tone word at the beginning and/or end of the corresponding answer to form a processed answer.
In a second aspect, an embodiment of the present invention provides an answer generating device for a question-answering system, including: a scene intention acquisition module configured to acquire the conversation scene and the user's conversation intention when a conversation is carried out with the user; a feature recognition module configured to identify the feature information contained in a question when the user's question is received; an answer generation module configured to generate a corresponding answer based on the user's question; a tone word generating module configured to generate at least one tone word according to the conversation intention, the conversation scene and the feature information; and a tone word insertion module configured to dynamically insert the at least one tone word at the beginning and/or end of the corresponding answer to form a processed answer.
In a third aspect, an electronic device is provided, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of the answer generation method for a question-answering system according to any one of the embodiments of the present invention.
In a fourth aspect, the present invention further provides a computer program product, comprising a computer program stored on a non-volatile computer-readable storage medium; the computer program includes program instructions which, when executed by a computer, cause the computer to execute the steps of the answer generation method for a question-answering system according to any embodiment of the present invention.
In the scheme provided by the present application, the conversation intention, the conversation scene and the feature information of the user's question are obtained from the overall conversation and the user's question; the corresponding tone words are then determined from the conversation intention, the conversation scene and the feature information and dynamically inserted into the answer. As a result, the answer content comes closer to the effect of communicating with a real person, the user's intention can be better sorted out and the user's emotions better guided and responded to, and the user experience is improved.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of an answer generation method for a question answering system according to an embodiment of the present invention;
fig. 2 is a block diagram of an answer generating device for a question answering system according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Please refer to Fig. 1, which shows a flowchart of an embodiment of the answer generation method for a question-answering system according to the present application. The method of this embodiment may be applied to terminals with question-answering functions, such as mobile phones, tablets, computers, smart voice TVs, smart speakers, smart dialogue toys, and other existing smart terminals with question-answering functions.
As shown in Fig. 1, in step 101, when a conversation is carried out with a user, the conversation scene and the user's conversation intention are acquired;
in step 102, when a question from the user is received, the feature information contained in the question is identified;
in step 103, a corresponding answer is generated based on the user's question;
in step 104, at least one tone word is generated according to the conversation intention, the conversation scene and the feature information;
in step 105, the at least one tone word is dynamically inserted at the beginning and/or end of the corresponding answer to form a processed answer.
In this embodiment, in step 101, the answer generation device for the question-answering system obtains the conversation scene corresponding to the current conversation node and the user's conversation intention during the conversation with the user, so that the conversational atmosphere can be grasped well. Then, in step 102, when a question from the user is received, the feature information contained in the question is identified; the feature information may include the user's emotion, the entities contained in the question, the user's language type and language habits, and so on, and the application is not limited in this respect. In step 103, a corresponding answer is generated based on the received question. In step 104, at least one tone word is generated according to the user's conversation intention, the conversation scene and the feature information related to the question. Finally, in step 105, the generated tone words are inserted at the beginning and/or end of the corresponding answer to form the answer, with tone words, that is fed back to the user.
In the method of this embodiment, corresponding tone words are added to the answer by taking into account the conversation intention, the conversation scene and the feature information in the user's question, so that the answer reads more like a real person's reply and the user's conversation experience is improved.
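As a purely illustrative sketch of how steps 101-105 could be strung together in code (all names here, such as MoodWord and toy_lookup, are hypothetical and do not come from the patent):

```python
from dataclasses import dataclass

@dataclass
class MoodWord:
    text: str
    position: str  # "begin" or "end" of the answer

def build_answer(scene, intention, features, base_answer, lookup):
    """Steps 104-105: generate tone words and insert them into the base answer."""
    mood_words = lookup(intention, scene, features)               # step 104
    prefix = "".join(w.text for w in mood_words if w.position == "begin")
    suffix = "".join(w.text for w in mood_words if w.position == "end")
    return prefix + base_answer + suffix                          # step 105

def toy_lookup(intention, scene, features):
    # trivial stand-in for the correspondence-table lookup described later
    words = []
    if features.get("emotion") == "frustrated":
        words.append(MoodWord("Don't worry, ", "begin"))
    if scene == "after_sales":
        words.append(MoodWord(" Hope that helps!", "end"))
    return words

# steps 101-103 (scene/intention, features, base answer) are assumed to have run already
print(build_answer("after_sales", "query_refund", {"emotion": "frustrated"},
                   "your refund will arrive within 3 days.", toy_lookup))
# -> Don't worry, your refund will arrive within 3 days. Hope that helps!
```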
In some optional embodiments, the method further comprises: determining the prosody for broadcasting the processed answer based on the conversation intention, the conversation scene and the user's emotion; and performing voice broadcasting of the answer after prosody synthesis processing. In this way, the prosody to be used during broadcasting can be determined from the known information, and the answer is then broadcast with the corresponding prosody, so that the tone words can play their role better.
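For illustration only, the broadcast prosody could be chosen with a small rule table such as the following; the rate/pitch parameters, the rule values and the scene/emotion labels are assumptions, and the result would typically be handed to a TTS engine that accepts such controls (for example via SSML prosody attributes):

```python
def choose_prosody(intention: str, scene: str, emotion: str) -> dict:
    """Map conversation intention, scene and user emotion to broadcast prosody."""
    prosody = {"rate": 1.0, "pitch": 1.0}            # neutral defaults
    if emotion == "angry":
        prosody.update(rate=0.9, pitch=0.95)         # slower and softer, to calm the user
    elif emotion == "happy":
        prosody.update(rate=1.05, pitch=1.05)        # slightly livelier delivery
    if scene == "complaint_handling":
        prosody["rate"] = min(prosody["rate"], 0.95) # keep delivery measured
    if intention == "urgent_help":
        prosody["rate"] = max(prosody["rate"], 1.0)  # do not slow down urgent answers
    return prosody

print(choose_prosody("query_refund", "complaint_handling", "angry"))
# -> {'rate': 0.9, 'pitch': 0.95}
```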
In some optional embodiments, the feature information includes the user's emotion and entities. By considering the user's emotion and the entities, the user's emotional state can be understood to a certain extent, which makes it convenient to guide the user's emotions by adding tone words and improves the user's conversation experience.
In a further optional embodiment, when the feature information is an entity, the method further includes: when the user's question is received, identifying the entity contained in the question; generating at least one tone word according to the entity; and dynamically inserting the at least one tone word before and/or after the entity. Adding the corresponding tone words before and after the entity gives the user a better conversation experience.
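As one hypothetical way to realize the entity-adjacent insertion (the first-occurrence string replacement is an assumption; the patent does not specify the insertion mechanics):

```python
def insert_around_entity(answer: str, entity: str,
                         before: str = "", after: str = "") -> str:
    """Insert tone words immediately before and/or after the first mention of the entity."""
    if entity not in answer:
        return answer
    return answer.replace(entity, f"{before}{entity}{after}", 1)

print(insert_around_entity("Order 1024 is delayed.", "Order 1024",
                           before="ah, ", after=", sorry to say,"))
# -> ah, Order 1024, sorry to say, is delayed.
```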
In some optional embodiments, generating at least one tone word according to the conversation intention, the conversation scene and the feature information comprises: maintaining at least one correspondence table for the conversation intention, the conversation scene, the feature information and the tone words, so that a corresponding tone word exists when a preset conversation intention and/or a preset conversation scene and/or preset feature information is present, wherein the tone words corresponding to the conversation intention, the conversation scene and the feature information have preset priorities; generating at least one corresponding tone word according to the conversation intention, the conversation scene and the feature information; and performing de-duplication on the at least one corresponding tone word based on the priority of the corresponding tone words, wherein the de-duplication includes removing duplicates and removing tone words of the same type, tone words of the same type being those that would occupy the beginning of the sentence at the same time or the end of the sentence at the same time. In this way, various conversation intentions, conversation scenes and pieces of feature information can be handled well, and the user experience kept high, simply by maintaining the correspondence table; a sketch of such a lookup follows.
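The following sketch illustrates such a correspondence-table lookup with priorities and de-duplication; the table contents, the priority values and the rule of keeping one tone word per sentence position are illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ToneWord:
    text: str
    position: str   # "begin" or "end" of the sentence
    priority: int   # higher value wins when two words compete for the same position

# correspondence table: (kind, key) -> candidate tone word
TABLE = {
    ("intention", "complain"): ToneWord("Oh no, ", "begin", 3),
    ("scene", "after_sales"):  ToneWord("Well, ", "begin", 1),
    ("feature", "angry"):      ToneWord(" Please bear with us.", "end", 2),
}

def generate_tone_words(intention, scene, features):
    candidates = [TABLE[k] for k in (("intention", intention), ("scene", scene)) if k in TABLE]
    candidates += [TABLE[("feature", f)] for f in features if ("feature", f) in TABLE]
    # de-duplication: keep only the highest-priority word for each sentence position,
    # which removes exact duplicates and competing words of the same type
    best = {}
    for w in sorted(candidates, key=lambda w: w.priority, reverse=True):
        best.setdefault(w.position, w)
    return list(best.values())

print(generate_tone_words("complain", "after_sales", ["angry"]))
# keeps "Oh no, " (priority 3) over "Well, " (priority 1), plus the sentence-final word
```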
The following description is provided to enable those skilled in the art to better understand the present disclosure by describing some of the problems encountered by the inventors in implementing the present disclosure and by describing one particular embodiment of the finally identified solution.
The invention provides a method for dynamically adding dialogue tone words in a dialogue system that fuses topic-based QA question answering with knowledge cards. In this dialogue system, the user's intention is sorted out and guided during the conversation, the dialogue state of the fused QA question answering and knowledge cards is recorded, and the user's emotion is recognized at each dialogue node; when the corresponding answer is given to the user, different state and emotion words are added. This requires integrating techniques from dialogue management, intention recognition, scene dialogue, emotion recognition and so on.
The technical innovations of the invention are:
dynamically generating effective dialogue tone words;
greatly reducing the amount of answer authoring required in dialogue design.
The invention comprises two main processes: one is recording the dialogue intention, contextual entities, emotion and dialogue nodes; the other is dynamically generating and inserting dialogue tone words based on this information.
1. Recording process based on the conversation intention, context entities, emotion and conversation node
A. generating the intention, entities and emotion from the dialogue text;
B. recording the information of the current conversation node;
C. generating the corresponding tone words according to the above information.
2. Process for dynamically generating and inserting dialogue tone words based on the related information
A. inserting the corresponding tone words at the beginning of the answer;
B. adding the corresponding tone words before or after the related entities;
C. inserting the corresponding tone words at the end of the answer.
in the model training stage, two systems, a keyword detection system and a speech rate detection system, are trained. The input of the keyword detection system is training data, namely a large number of sound records containing or not containing the awakening words, and whether the output sound records contain the awakening words or not is judged. The input of the speech speed detection system is the recording data, and the output is the speed of the recording speech speed, which is essentially a two-classifier.
In the testing stage, the testing record is sent to a speech speed detection and keyword detection system; the speed of speech detecting system detects the speed of speech, if it is fast speed of speech, the keyword detecting system uses the sliding window of smaller length, if it is slow, the sliding window of larger length is used; and finally giving a wake-up result, namely whether the key word is available or not.
The embodiment can at least realize the following technical effects:
according to the scheme provided by the embodiment, the influence of the speech rate on the awakening result is considered, the speech rate detection is added, and the sliding windows with different lengths are adopted for the voices with different speech rates, so that the influence of the speech rate on the awakening result can be greatly reduced.
Please refer to fig. 2, which illustrates a block diagram of an answer generating device for a question answering system according to an embodiment of the present invention.
As shown in fig. 2, an answer generating apparatus 200 for a question-answering system includes a scene intention acquisition module 210, a feature recognition module 220, an answer generating module 230, a tone word generating module 240, and a tone word insertion module 250.
The scene intention acquisition module 210 is configured to acquire a dialog scene and a dialog intention of a user when performing a dialog with the user; a feature recognition module 220 configured to, when a question of a user is received, recognize feature information included in the question; an answer generating module 230 configured to generate a corresponding answer based on the question of the user; a tone word generating module 240 configured to generate at least one tone word according to the dialog intention, the dialog scene, and the feature information; and a tone word insertion module 250 configured to dynamically insert the at least one tone word into the beginning and/or the end of the corresponding answer to form a processed answer.
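One hypothetical way to wire these five modules together as injected callables (the class name, constructor signature and stand-in lambdas are assumptions, not part of the patent):

```python
class AnswerGenerationDevice:
    """Composes the five modules of Fig. 2 (210-250) supplied as callables."""

    def __init__(self, scene_intent, feature_id, answer_gen, tone_gen, tone_insert):
        self.scene_intent = scene_intent   # module 210
        self.feature_id = feature_id       # module 220
        self.answer_gen = answer_gen       # module 230
        self.tone_gen = tone_gen           # module 240
        self.tone_insert = tone_insert     # module 250

    def answer(self, question, dialog_history):
        scene, intention = self.scene_intent(dialog_history)
        features = self.feature_id(question)
        base = self.answer_gen(question)
        words = self.tone_gen(intention, scene, features)
        return self.tone_insert(base, words)

# usage with trivial stand-in callables
device = AnswerGenerationDevice(
    scene_intent=lambda history: ("after_sales", "query_refund"),
    feature_id=lambda q: {"emotion": "neutral"},
    answer_gen=lambda q: "your refund is on the way.",
    tone_gen=lambda intention, scene, features: ["Okay, "],
    tone_insert=lambda answer, words: "".join(words) + answer,
)
print(device.answer("Where is my refund?", []))   # -> Okay, your refund is on the way.
```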
In some optional embodiments, the answer generating apparatus 200 for the question-answering system further includes: a prosody determining module (not shown in the figure) configured to determine the prosody for broadcasting the processed answer based on the dialog intention, the dialog scene, and the user emotion; and a broadcast module (not shown in the figure) configured to synthesize the voice broadcast of the processed answer based on the prosody and perform the voice broadcast.
In some optional embodiments, the characteristic information includes user emotion and entity.
It should be understood that the modules recited in fig. 2 correspond to various steps in the method described with reference to fig. 1. Thus, the operations and features described above for the method and the corresponding technical effects are also applicable to the modules in fig. 2, and are not described again here.
It should be noted that the modules recited in the embodiments of the present application are not intended to limit the solution of the present application; for example, the answer generation module may also be described as a module that generates a corresponding answer based on the user's question. In addition, the related functional modules may be implemented by a hardware processor; for example, the answer generation module may be implemented by a processor, which is not described in detail here.
In other embodiments, the present invention further provides a non-volatile computer storage medium storing computer-executable instructions that can perform the answer generation method for a question-answering system in any of the above method embodiments.
as one embodiment, a non-volatile computer storage medium of the present invention stores computer-executable instructions configured to:
when a conversation is carried out with a user, a conversation scene and a conversation intention of the user are obtained;
when a question of a user is received, identifying characteristic information contained in the question;
generating a corresponding answer based on the question of the user;
generating at least one tone word according to the conversation intention, the conversation scene and the characteristic information;
dynamically inserting the at least one tone word into the beginning and/or end of the corresponding answer to form a processed answer.
The non-volatile computer readable storage medium may include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created from use of the answer generating device for the question-answering system, and the like. Further, the non-volatile computer-readable storage medium may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some embodiments, the non-transitory computer readable storage medium optionally includes a memory remotely located from the processor, which may be connected over a network to an answer generation device for the question-answering system. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
Embodiments of the present invention further provide a computer program product, where the computer program product includes a computer program stored on a non-volatile computer-readable storage medium, and the computer program includes program instructions, and when the program instructions are executed by a computer, the computer executes any one of the answer generation methods for a question answering system.
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present invention, and as shown in fig. 3, the electronic device includes: one or more processors 310 and a memory 320, one processor 310 being illustrated in fig. 3. The apparatus for the answer generating method of the question-answering system may further include: an input device 330 and an output device 340. The processor 310, memory 320, input device 330, and output device 340 may be connected by a bus or other means, such as by a bus connection in fig. 3. The memory 320 is a non-volatile computer-readable storage medium as described above. The processor 310 executes various functional applications of the server and data processing by running the nonvolatile software programs, instructions and modules stored in the memory 320, that is, implements the answer generating method for the question-answering system of the above-described method embodiments. The input device 330 may receive input numeric or character information and generate key signal inputs related to user settings and function control of the answer generating device for the question-answering system. The output device 340 may include a display device such as a display screen.
The product can execute the method provided by the embodiment of the invention, and has corresponding functional modules and beneficial effects of the execution method. For technical details that are not described in detail in this embodiment, reference may be made to the method provided by the embodiment of the present invention.
As an embodiment, the electronic device is applied to an answer generating apparatus for a question answering system, and includes: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to cause the at least one processor to:
when a conversation is carried out with a user, a conversation scene and a conversation intention of the user are obtained;
when a question of a user is received, identifying characteristic information contained in the question;
generating a corresponding answer based on the question of the user;
generating at least one tone word according to the conversation intention, the conversation scene and the characteristic information;
dynamically inserting the at least one tone word into the beginning and/or end of the corresponding answer to form a processed answer.
The electronic device of the embodiments of the present application exists in various forms, including but not limited to:
(1) Mobile communication devices: such devices are characterized by mobile communication capabilities and are primarily aimed at providing voice and data communication. Such terminals include smart phones (e.g., iPhones), multimedia phones, feature phones, and low-end phones.
(2) Ultra-mobile personal computer devices: such devices belong to the category of personal computers, have computing and processing functions, and generally also have mobile internet access. Such terminals include PDA, MID and UMPC devices, such as iPads.
(3) Portable entertainment devices: such devices can display and play multimedia content. They include audio and video players (e.g., iPods), handheld game consoles, e-book readers, as well as smart toys and portable car navigation devices.
(4) Servers: devices similar in architecture to general-purpose computers, but with higher requirements on processing capacity, stability, reliability, security, scalability and manageability, since they must provide highly reliable services.
(5) Other electronic devices with data interaction functions.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods of the various embodiments or some parts of the embodiments.
Finally, it should be noted that: the above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (5)
1. An answer generating method for a question-answering system, comprising:
when a conversation is carried out with a user, a conversation scene and a conversation intention of the user are obtained;
when a question of a user is received, identifying the feature information contained in the question, wherein the feature information comprises user emotion and an entity;
generating a corresponding answer based on the question of the user;
generating at least one tone word according to the conversation intention, the conversation scene and the feature information, wherein the at least one tone word comprises a tone word generated according to the entity;
dynamically inserting the at least one tone word at the beginning and/or the end of the corresponding answer, and dynamically inserting the tone word generated according to the entity before and/or after the entity, to form a processed answer;
wherein the generating at least one tone word according to the conversation intention, the conversation scene and the feature information comprises:
maintaining at least one correspondence table for the conversation intention, the conversation scene, the feature information and the tone words, so that a corresponding tone word exists when a preset conversation intention and/or a preset conversation scene and/or preset feature information is present, wherein the tone words corresponding to the conversation intention, the conversation scene and the feature information have preset priorities;
generating at least one corresponding tone word according to the conversation intention, the conversation scene and the feature information;
and performing de-duplication on the at least one corresponding tone word based on the priority of the corresponding tone words, wherein the de-duplication comprises removing duplicates and removing tone words of the same type, tone words of the same type being those positioned at the beginning of a sentence at the same time or at the end of a sentence at the same time.
2. The method of claim 1, wherein the method further comprises:
determining a prosody for broadcasting the processed answer based on the conversation intention, the conversation scene, and the user emotion;
and synthesizing the voice broadcast of the processed answer based on the prosody and performing the voice broadcast.
3. An answer generating apparatus for a question-answering system, comprising:
the scene intention acquisition module is configured to acquire a conversation scene and a conversation intention of a user when a conversation is carried out with the user;
the system comprises a characteristic identification module, a processing module and a processing module, wherein the characteristic identification module is configured to identify characteristic information contained in a question when the question of a user is received, wherein the characteristic information comprises user emotion and an entity;
an answer generation module configured to generate a corresponding answer based on the question of the user;
a tone word generating module configured to generate at least one tone word according to the conversation intention, the conversation scene, and the feature information, wherein the at least one tone word comprises a tone word generated according to the entity;
a tone word insertion module configured to dynamically insert the at least one tone word at the beginning and/or the end of the corresponding answer, and to dynamically insert the tone word generated according to the entity before and/or after the entity, to form a processed answer;
wherein the tone word generating module is configured to:
maintain at least one correspondence table for the conversation intention, the conversation scene, the feature information and the tone words, so that a corresponding tone word exists when a preset conversation intention and/or a preset conversation scene and/or preset feature information is present, wherein the tone words corresponding to the conversation intention, the conversation scene and the feature information have preset priorities;
generate at least one corresponding tone word according to the conversation intention, the conversation scene and the feature information;
and perform de-duplication on the at least one corresponding tone word based on the priority of the corresponding tone words, wherein the de-duplication comprises removing duplicates and removing tone words of the same type, tone words of the same type being those positioned at the beginning of a sentence at the same time or at the end of a sentence at the same time.
4. The apparatus of claim 3, wherein the apparatus further comprises:
a prosody determining module configured to determine a prosody for broadcasting the processed answer based on the conversation intention, the conversation scene, and the user emotion;
and a broadcasting module configured to synthesize the voice broadcast of the processed answer based on the prosody and perform the voice broadcast.
5. An electronic device, comprising: at least one processor, and a memory communicatively coupled to the at least one processor, wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the steps of the method of claim 1 or 2.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910814386.2A CN110516043B (en) | 2019-08-30 | 2019-08-30 | Answer generation method and device for question-answering system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910814386.2A CN110516043B (en) | 2019-08-30 | 2019-08-30 | Answer generation method and device for question-answering system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110516043A CN110516043A (en) | 2019-11-29 |
CN110516043B true CN110516043B (en) | 2022-09-20 |
Family
ID=68628421
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910814386.2A Active CN110516043B (en) | 2019-08-30 | 2019-08-30 | Answer generation method and device for question-answering system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110516043B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112015919A (en) * | 2020-09-15 | 2020-12-01 | 重庆广播电视大学重庆工商职业学院 | Dialogue management method based on learning auxiliary knowledge graph |
CN115273852A (en) * | 2022-06-21 | 2022-11-01 | 北京小米移动软件有限公司 | Voice response method and device, readable storage medium and chip |
Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008128423A1 (en) * | 2007-04-19 | 2008-10-30 | Shenzhen Institute Of Advanced Technology | An intelligent dialog system and a method for realization thereof |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10193850B2 (en) * | 2016-03-30 | 2019-01-29 | Notion Ai, Inc. | Discovering questions, directives, and requests from, and prioritizing replies to, a preferred message sender method and apparatus |
CN105929964A (en) * | 2016-05-10 | 2016-09-07 | 海信集团有限公司 | Method and device for human-computer interaction |
CN108597509A (en) * | 2018-03-30 | 2018-09-28 | 百度在线网络技术(北京)有限公司 | Intelligent sound interacts implementation method, device, computer equipment and storage medium |
CN109684459A (en) * | 2018-12-28 | 2019-04-26 | 联想(北京)有限公司 | A kind of information processing method and device |
- 2019-08-30: application CN201910814386.2A filed in China (CN); granted as patent CN110516043B, status Active
Patent Citations (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008128423A1 (en) * | 2007-04-19 | 2008-10-30 | Shenzhen Institute Of Advanced Technology | An intelligent dialog system and a method for realization thereof |
Non-Patent Citations (1)
Title |
---|
Research on recognition methods for redundant words in spoken dialogue; Zhai Feifei et al.; Journal of Chinese Information Processing; 2011-05-15 (No. 03); full text *
Also Published As
Publication number | Publication date |
---|---|
CN110516043A (en) | 2019-11-29 |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
EP3779972B1 (en) | Voice wake-up method and apparatus | |
CN108962233B (en) | Voice conversation processing method and system for voice conversation platform | |
CN109951743A (en) | Barrage information processing method, system and computer equipment | |
CN108920128B (en) | Operation method and system of presentation | |
CN115329206B (en) | Voice outbound processing method and related device | |
CN106384591A (en) | Method and device for interacting with voice assistant application | |
CN110503944B (en) | Method and device for training and using voice awakening model | |
CN109493888B (en) | Cartoon dubbing method and device, computer-readable storage medium and electronic equipment | |
CN109460503B (en) | Answer input method, answer input device, storage medium and electronic equipment | |
CN111832308A (en) | Method and device for processing consistency of voice recognition text | |
CN111081280A (en) | Text-independent speech emotion recognition method and device and emotion recognition algorithm model generation method | |
CN112653902A (en) | Speaker recognition method and device and electronic equipment | |
CN110516043B (en) | Answer generation method and device for question-answering system | |
CN110111782B (en) | Voice interaction method and device | |
CN114390220B (en) | Animation video generation method and related device | |
CN111753508A (en) | Method and device for generating content of written works and electronic equipment | |
CN110517692A (en) | Hot word audio recognition method and device | |
CN111107442A (en) | Method and device for acquiring audio and video files, server and storage medium | |
CN117253478A (en) | Voice interaction method and related device | |
CN110473524B (en) | Method and device for constructing voice recognition system | |
CN110931014A (en) | Speech recognition method and device based on regular matching rule | |
CN114694629B (en) | Voice data amplification method and system for voice synthesis | |
CN112988956A (en) | Method and device for automatically generating conversation and method and device for detecting information recommendation effect | |
CN112447177A (en) | Full duplex voice conversation method and system | |
CN113643706B (en) | Speech recognition method, device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: 215123 building 14, Tengfei Innovation Park, 388 Xinping street, Suzhou Industrial Park, Suzhou City, Jiangsu Province; Applicant after: Sipic Technology Co.,Ltd. Address before: 215123 building 14, Tengfei Innovation Park, 388 Xinping street, Suzhou Industrial Park, Suzhou City, Jiangsu Province; Applicant before: AI SPEECH Co.,Ltd. |
| GR01 | Patent grant | |