CN116561286A - Dialogue method and device - Google Patents
- Publication number
- CN116561286A CN116561286A CN202310830861.1A CN202310830861A CN116561286A CN 116561286 A CN116561286 A CN 116561286A CN 202310830861 A CN202310830861 A CN 202310830861A CN 116561286 A CN116561286 A CN 116561286A
- Authority
- CN
- China
- Prior art keywords
- reasoning
- text
- llm
- user
- personification
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/30—Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
- G06F16/33—Querying
- G06F16/332—Query formulation
- G06F16/3329—Natural language query formulation or dialogue systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N5/00—Computing arrangements using knowledge-based models
- G06N5/04—Inference or reasoning models
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
Various embodiments disclosed herein provide a dialogue method and apparatus. Instead of the existing approach, in which intent understanding is performed on the user's expressed text and the understood intent is matched to a fixed answer, the natural language understanding capability of an LLM is leveraged: the LLM performs personified reasoning on the user expression text to infer a reasonable answer that matches the user intent contained in that text. The LLM is then used again to refine the way the reasonable answer is expressed, making it more personified, before it is finally returned to the user. The resulting answer both satisfies the user's intent and gives the user a personified feeling.
Description
Technical Field
Embodiments of the present disclosure relate to the field of artificial intelligence, and in particular, to a dialogue method and apparatus.
Background
Currently, a user may realize a certain business intent by conducting a conversation with a dialogue system. For example, the user can hold a voice conversation with the dialogue system built into a smart speaker, so that the dialogue system queries the weather conditions for the user.
Existing dialogue systems typically determine a user's business intent based on what the user expresses and then return a fixed answer that matches that intent.
However, the above method has two problems: the judgment of the user's business intent may not be accurate enough, so that the fixed answer fails to meet the user's needs; and the fixed answer gives the user a feeling of not being personified.
Disclosure of Invention
The technical solutions provided by the embodiments of this specification are as follows:
according to a first aspect of various embodiments of the present specification, a dialogue method is proposed, applied to a dialogue system, the method comprising:
determining a preset personified reasoning paradigm, wherein the personified reasoning paradigm comprises at least one round of a personified reasoning process, and each round of the personified reasoning process comprises: a thinking step, an operation step and a discovery step; the thinking step is to think with the aim of responding to a text to be considered and to judge whether an information query operation is needed to advance the thinking: if an information query operation is needed, the operation content of the required information query operation is determined, and if not, the text to be considered is taken as a reasoning result; the operation step is to execute the operation content of the information query operation determined in the thinking step by calling one or more information query interfaces (APIs); the discovery step is to determine the information query result produced by the operation step and to take the determined result as the new text to be considered;
constructing a first type of indication information based on the personified reasoning paradigm, and inputting the first type of indication information into a large language model (LLM) to instruct the LLM to interact with the dialogue system according to the personified reasoning paradigm;
acquiring a user expression text when a user performs a dialogue with the dialogue system;
taking the user expression text as the text to be considered and inputting it into the LLM, so that the LLM outputs a reasoning result through at least one round of the personified reasoning process;
constructing a second type of indication information based on a preset personality setting of the dialogue system, the user expression text and the reasoning result;
and inputting the second type of indication information into the LLM, which outputs a personified answer responding to the user expression text, and returning the personified answer to the user.
According to a second aspect of embodiments of the present specification, there is provided a dialogue apparatus for use in a dialogue system, the apparatus comprising:
a determining module, configured to determine a preset personified reasoning paradigm, wherein the personified reasoning paradigm comprises at least one round of a personified reasoning process, and each round of the personified reasoning process comprises: a thinking step, an operation step and a discovery step; the thinking step is to think with the aim of responding to a text to be considered and to judge whether an information query operation is needed to advance the thinking: if an information query operation is needed, the operation content of the required information query operation is determined, and if not, the text to be considered is taken as a reasoning result; the operation step is to execute the operation content of the information query operation determined in the thinking step by calling one or more information query interfaces (APIs); the discovery step is to determine the information query result produced by the operation step and to take the determined result as the new text to be considered;
a first processing module, which constructs a first type of indication information based on the personified reasoning paradigm and inputs it into a large language model (LLM) to instruct the LLM to interact with the dialogue system according to the personified reasoning paradigm;
an acquisition module, which acquires the user expression text when the user converses with the dialogue system;
an interaction module, which takes the user expression text as the text to be considered and inputs it into the LLM, so that the LLM outputs a reasoning result through at least one round of the personified reasoning process;
a second processing module, configured to construct a second type of indication information based on a preset personality setting of the dialogue system, the user expression text and the reasoning result;
and an answer module, which inputs the second type of indication information into the LLM, obtains a personified answer responding to the user expression text, and returns the personified answer to the user.
According to a third aspect of embodiments of the present specification, a computing device is presented, comprising a memory and a processor; the memory is configured to store computer instructions executable on the processor, and the processor is configured to implement the method of the first aspect when executing the computer instructions.
According to a fourth aspect of embodiments of the present specification, there is provided a computer readable storage medium having stored thereon a computer program which when executed by a processor implements the method of the first aspect.
In the above technical solution, the existing method of performing intent understanding on the user expression text (e.g. extracting keywords for intent matching) and using the understood intent to match a fixed answer is no longer used. Instead, the natural language understanding capability of a large language model (LLM, Large Language Model) is leveraged, so that the LLM performs personified reasoning on the user expression text to infer a reasonable answer (i.e. the reasoning result) that matches the user intent contained in the text. The LLM is then used again to refine the expression of this reasonable answer so that it is more personified; the answer finally returned to the user both satisfies the user's intent and gives the user a personified feeling.
Specifically, a preset personified reasoning paradigm needs to be indicated to the LLM, so that the LLM interacts with the dialogue system according to that paradigm and the dialogue system, by interacting with the LLM, obtains a reasonable answer (i.e. the reasoning result) that matches the user intent contained in the user expression text. The personified reasoning paradigm can be modeled on the ReAct framework; it comprises at least one round of a personified reasoning process, and each round comprises three sequential steps: thinking, operation, discovery. The thinking step aims at responding to the text to be considered and judges whether an information query operation is needed to advance the thinking: if so, the operation content of the required information query operation is determined; if not, the text to be considered is taken as the reasoning result. The operation step executes the operation content of the information query operation determined in the thinking step. The discovery step determines the information query result produced by the operation step and takes it as the new text to be considered, so that the next thinking step in the personified reasoning process can continue to think about the redetermined text.
In addition, considering that a real person's reasoning often depends on some information sources, one or more information query interfaces (APIs) can be called in the operation step of the personified reasoning process to execute the operation content of the information query operation determined in the thinking step.
After the dialogue system has used the LLM to obtain a reasonable answer (i.e. the reasoning result) that matches the intent contained in the user expression text, the personality setting of the dialogue system (i.e. its personified expression style), the user expression text and the reasonable answer can further be formed into an instruction input to the LLM, so that the LLM, starting from the reasonable answer, gives a personified answer with reference to the personality setting and the original user expression text the answer responds to.
With this technical solution, the answer returned to the user by the dialogue system not only matches the user's intent but also gives the user a personified dialogue experience.
Drawings
Fig. 1 schematically illustrates the technical solution provided in the present disclosure.
Fig. 2 is an exemplary flow chart of a dialog method.
Fig. 3 is a schematic diagram of a computer-readable storage medium provided by the present disclosure.
Fig. 4 is a schematic structural diagram of a computing device provided by the present disclosure.
In the drawings, the same or corresponding reference numerals indicate the same or corresponding parts. The number of elements in the figures is illustrative rather than limiting, and any naming is used only for distinction and carries no limiting sense.
Detailed Description
In order to make the technical solutions in the present specification better understood by those skilled in the art, the technical solutions in the embodiments of the present specification will be clearly and completely described below with reference to the drawings in the embodiments of the present specification, and it is obvious that the described embodiments are only some embodiments of the present specification, not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are intended to be within the scope of the present disclosure.
It should be noted that: in other embodiments, the steps of the corresponding method are not necessarily performed in the order shown and described in this specification. In some other embodiments, the method may include more or fewer steps than described in this specification. Furthermore, individual steps described in this specification, in other embodiments, may be described as being split into multiple steps; while various steps described in this specification may be combined into a single step in other embodiments.
Some concepts involved in the embodiments of the present disclosure are described below.
Dialog system: a software system that performs one or more rounds of conversations with a user.
User expression text: each input presented by the user to the dialogue system (whether in speech, text or action form) may form a user expression text. During a dialogue, the dialogue system gives a corresponding answer for each user expression text. The technical solution of the present disclosure focuses on whether the answer given for each user expression text sufficiently satisfies the intent the user expressed and is sufficiently personified.
Large language model (LLM, Large Language Model): an LLM is a neural-network-based natural language processing model, such as BERT or GPT, used to model a text sequence and predict the occurrence probability of the next word or sentence. In natural language processing, language models are widely used in a variety of tasks such as text generation, automatic question answering and machine translation.
Prompt learning (prompt learning): also called instruction learning; by inputting instructions (prompts) to the LLM, the LLM is made to interact according to those instructions. For example, in a text sentiment classification task, for the input "I love this movie", a pattern such as "It was really ___" may be appended to the input; the LLM then fills in a word expressing sentiment, such as "great" or "fantastic", and that answer is finally converted into a sentiment classification label. By selecting an appropriate prompt, we can control the model's predicted output, so that a completely unsupervised pretrained LLM can be used to solve various downstream tasks. Essentially, all downstream tasks are unified into the pretraining task: a template matching the upstream pretraining task is designed, the data of the downstream task is converted into natural-language form, and the capability of the pretrained model is fully exploited.
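The cloze-style pattern just described can be sketched in a few lines of Python; the template wording and the word-to-label mapping (the "verbalizer") below are illustrative assumptions, not part of the patent:

```python
# Sketch of cloze-style prompt learning: wrap the input in a template,
# let the LLM fill the blank, then map the filled word to a label.
TEMPLATE = "{text} It was really ___."

# Verbalizer: candidate fill-in words mapped to sentiment labels (assumed).
VERBALIZER = {"great": "positive", "fantastic": "positive",
              "boring": "negative", "terrible": "negative"}

def build_prompt(text: str) -> str:
    """Turn a classification input into a fill-in-the-blank prompt."""
    return TEMPLATE.format(text=text)

def to_label(filled_word: str) -> str:
    """Map the word the model filled in back to a sentiment label."""
    return VERBALIZER.get(filled_word.lower(), "unknown")

prompt = build_prompt("I love this movie.")
label = to_label("great")  # the LLM's fill-in, converted to a label
```

The verbalizer is the piece that "converts the answer into a sentiment classification label" as described above; in practice it would cover many more surface forms.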
The ReAct framework: ReAct is a generic paradigm that combines reasoning and acting with an LLM. ReAct prompts the LLM to generate verbal reasoning trajectories and operations for a task in a manner that mimics human thinking. This allows the system to perform dynamic reasoning, creating, maintaining and adjusting operation plans, while also supporting interaction with external environments (e.g. Wikipedia) to incorporate additional information into the reasoning; an interaction generally follows the thinking-operation-discovery reasoning paradigm.
In various embodiments provided by the present disclosure, the existing approach of performing intent understanding on the user expression text (e.g. extracting keywords for intent matching) and using the understood intent to match a fixed answer is no longer used. Instead, the natural language understanding capability of the LLM is leveraged, so that the LLM performs personified reasoning on the user expression text to infer a reasonable answer (i.e. the reasoning result) that matches the user intent contained in the text. The LLM is then used again to refine the expression of this reasonable answer so that it is more personified; the answer finally returned to the user both satisfies the user's intent and gives the user a personified feeling.
Specifically, a preset personified reasoning paradigm needs to be indicated to the LLM, so that the LLM interacts with the dialogue system according to that paradigm and the dialogue system, by interacting with the LLM, obtains a reasonable answer (i.e. the reasoning result) that matches the user intent contained in the user expression text. The personified reasoning paradigm can be modeled on the ReAct framework; it comprises at least one round of a personified reasoning process, and each round comprises three sequential steps: thinking, operation, discovery. The thinking step aims at responding to the text to be considered and judges whether an information query operation is needed to advance the thinking: if so, the operation content of the required information query operation is determined; if not, the text to be considered is taken as the reasoning result. The operation step executes the operation content of the information query operation determined in the thinking step. The discovery step determines the information query result produced by the operation step and takes it as the new text to be considered, so that the next thinking step in the personified reasoning process can continue to think about the redetermined text.
In addition, considering that a real person's reasoning often depends on some information sources, one or more information query interfaces (APIs) can be called in the operation step of the personified reasoning process to execute the operation content of the information query operation determined in the thinking step.
After the dialogue system has used the LLM to obtain a reasonable answer (i.e. the reasoning result) that matches the intent contained in the user expression text, the personality setting of the dialogue system (i.e. its personified expression style), the user expression text and the reasonable answer can further be formed into an instruction input to the LLM, so that the LLM, starting from the reasonable answer, gives a personified answer with reference to the personality setting and the original user expression text the answer responds to.
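Assembling that instruction from the personality setting, the user expression text and the reasonable answer can be sketched as follows; the template wording and the example values are illustrative assumptions, not the patent's exact prompt:

```python
# Sketch of building the "second type of indication information":
# persona + original user text + reasoning result, for a personified rewrite.
PERSONA_TEMPLATE = (
    "You are a dialogue assistant with the following persona: {persona}\n"
    "The user said: {user_text}\n"
    "A factually reasonable answer is: {reasoning_result}\n"
    "Rewrite the answer in your persona's style, keeping the facts intact."
)

def build_persona_prompt(persona: str, user_text: str,
                         reasoning_result: str) -> str:
    """Combine persona setting, user text and reasoning result into one prompt."""
    return PERSONA_TEMPLATE.format(
        persona=persona, user_text=user_text,
        reasoning_result=reasoning_result)

prompt = build_persona_prompt(
    persona="a warm, patient health companion",
    user_text="What snacks are good for high blood pressure?",
    reasoning_result="Unsalted nuts such as pumpkin seeds are rich in potassium.")
```

The prompt would then be sent to the LLM, whose output is the personified answer returned to the user.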
Referring to fig. 1, which schematically illustrates the technical solution provided in the present disclosure. With this technical solution, the answer returned to the user by the dialogue system not only matches the user's intent but also gives the user a personified dialogue experience.
The following describes the above technical scheme in detail with reference to the accompanying drawings.
Fig. 2 exemplarily provides a flow diagram of a dialogue method, including:
s200: and determining a preset anthropomorphic reasoning paradigm.
The method shown in fig. 2 is applied to a dialog system.
The personified reasoning paradigm may include at least one round of the personified reasoning process, and each round may include a thinking step, an operation step and a discovery step.
The thinking step: thinking with the aim of responding to the text to be considered, and judging whether an information query operation is needed to advance the thinking; if an information query operation is determined to be needed, determining the operation content of the required information query operation, and if not, taking the text to be considered as the reasoning result.
The operation step: executing the operation content of the information query operation determined in the thinking step by calling one or more information query interfaces (APIs).
The discovery step: determining the information query result produced by the operation step, and taking the determined result as the new text to be considered.
The personified reasoning paradigm can be set with reference to the ReAct framework. The following example shows a multi-round personified reasoning process included in the paradigm:
[ Question: What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?
Thinking 1: I need to search the Colorado orogeny, find the area that the eastern sector of the Colorado orogeny extends into, and then find the elevation range of that area.
Operation 1: search [Colorado orogeny].
Observation (discovery) 1: The Colorado orogeny was an episode of mountain building in Colorado and surrounding areas.
Thinking 2: It does not mention the eastern sector. I need to look up the eastern sector.
Operation 2: lookup [eastern sector].
Observation 2: (Result 1/1) The eastern sector extends into the High Plains and is called the Central Plains orogeny.
Thinking 3: The eastern sector of the Colorado orogeny extends into the High Plains. I need to search the High Plains to find their elevation range.
Operation 3: search [High Plains].
Observation 3: High Plains refers to one of two distinct land regions.
Thinking 4: I need to search High Plains (United States) instead.
Operation 4: search [High Plains (United States)].
Observation 4: The High Plains are a subregion of the Great Plains. The elevation of the High Plains rises from around 1,800 to 7,000 feet (550 to 2,130 m) from east to west.
Thinking 5: The elevation of the High Plains is from around 1,800 to 7,000 feet, so the answer is 1,800 to 7,000 feet.
Operation 5: finish [1,800 to 7,000 feet]
……]
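The thinking-operation-discovery rounds illustrated above can be recorded in a simple data structure; the following Python sketch (field names and example values are illustrative assumptions, not the patent's implementation) shows one possibility:

```python
# Sketch of a record for multi-round personified reasoning:
# each round holds a thought, an optional operation, and its observation.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ReasoningRound:
    thought: str                       # the "thinking" step
    operation: Optional[str] = None    # e.g. "search[Colorado orogeny]"
    observation: Optional[str] = None  # the "discovery" step

@dataclass
class ReasoningRecord:
    question: str
    rounds: list = field(default_factory=list)
    answer: Optional[str] = None       # set once thinking yields a result

record = ReasoningRecord(question="What is the elevation range ...")
record.rounds.append(ReasoningRound(
    thought="I need to search the Colorado orogeny first.",
    operation="search[Colorado orogeny]",
    observation="The Colorado orogeny was an episode of mountain building."))
```

When a thinking step decides no further query is needed, `answer` would be filled and the rounds stop.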
S202: and constructing first-class indication information based on the personification reasoning paradigm, and inputting the first-class indication information into a large language model LLM.
A plurality of personified reasoning process samples can be constructed in advance using corpora from a specified business scenario, and the LLM can be fine-tuned based on these samples. This improves the personified reasoning capability of the LLM in the specified business scenario.
The purpose of step S202 is to instruct the LLM to interact with the dialog system according to the personified reasoning paradigm.
In some embodiments, a first type of indication information may be constructed that characterizes the personified reasoning paradigm, the set of APIs the operation step may call, and several personified reasoning process reference examples. This first type of indication information may then be input into the large language model (LLM) to instruct the LLM to follow the reference examples according to the personified reasoning paradigm and to call one or more APIs in the API set to interact with the dialogue system.
The API set described above includes at least one of: weather platform APIs, drug query APIs, one-touch drive APIs, browser search APIs, database APIs, math calculation APIs, insurance query APIs, insurance recommendation APIs, and the like.
The API set described above may also include: task-type dialog APIs, knowledge-graph dialog APIs, knowledge-base dialog APIs, chat APIs, etc.
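As a hedged illustration of such an API set, the following sketch registers a couple of stub query interfaces and dispatches an operation step to them; the names and stub bodies are assumptions for demonstration, not real platform APIs:

```python
# Sketch of an information-query API registry plus a dispatcher for
# the operation step. Both "APIs" here are stubs.
def weather_api(city: str) -> str:
    return f"(stub) weather for {city}"

def math_api(expression: str) -> str:
    # Demo only: never eval untrusted input in a real system.
    return str(eval(expression))

TOOLS = {
    "weather": weather_api,
    "math": math_api,
}

def run_operation(tool_name: str, argument: str) -> str:
    """Dispatch the operation content to the named query API."""
    if tool_name not in TOOLS:
        return f"unknown tool: {tool_name}"
    return TOOLS[tool_name](argument)
```

A real system would back each entry with an actual service (weather platform, database, insurance query, and so on) behind a uniform calling convention.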
S204: and acquiring the user expression text when the user performs a dialogue with the dialogue system.
S206: and taking the text of the user expression content as a text to be considered, inputting the text to be considered into the LLM, so that the LLM outputs an inference result through at least one round of anthropomorphic inference process.
A simple implementation of step S206 that a person skilled in the art can easily conceive is to input the text to be considered into the LLM, let the LLM automatically complete multiple rounds of the personified reasoning process, and finally output the reasoning result to the dialogue system.
For S206, the present disclosure also provides a more complex but better-performing implementation, which includes:
creating an empty personified reasoning process record, taking the user expression text as the text to be considered, filling it into the personified reasoning process record, inputting the record into the LLM, and iteratively performing the following steps:
acquiring the interactive feedback output by the LLM;
if the interactive feedback is a reasoning result, ending the iteration;
if the interactive feedback is the operation content of an information query operation to be performed, filling the operation content into the latest thinking step in the reasoning process record, and then inputting the record into the LLM;
if the interactive feedback is the operation process of executing the operation content of the information query operation, filling the operation process into the latest operation step in the reasoning process record, and then inputting the record into the LLM;
and if the interactive feedback is a redetermined text to be considered, filling that text into the latest discovery step in the reasoning process record, and then inputting the record into the LLM.
Furthermore, in some embodiments, the iteratively performed steps may further include:
if the interactive feedback indicates that reference information on which the operation content of the information query operation depends is lacking, acquiring the reference information, filling the reference information into the operation input associated with the latest operation step in the reasoning process record, and inputting the reasoning process record into the LLM.
Further, in some embodiments, an inquiry statement asking for the reference information may be returned to the user, and the reference information provided by the user may then be received.
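The iterative procedure above can be sketched as a simple driver loop. This is a minimal illustration under assumed interfaces: the `llm` callable returning a (feedback-type, content) pair, the feedback-type strings, and the `ask_user` helper are all hypothetical stand-ins, not part of the disclosure:

```python
def run_personified_reasoning(llm, user_text, ask_user):
    """Drive one reasoning episode: feed the record to the LLM, interpret its
    interactive feedback, and append to the record until a result is produced."""
    record = []                                   # the (initially empty) reasoning process record
    record.append(("question", user_text))        # user expression text as the text to be considered
    while True:
        feedback_type, content = llm(record)      # interactive feedback output by the LLM
        if feedback_type == "result":             # reasoning result -> end the iteration
            return content
        if feedback_type == "thinking":           # operation content to be performed
            record.append(("thinking", content))  # fill into the latest thinking step
        elif feedback_type == "operation":        # process of executing the query operation
            record.append(("operation", content))
        elif feedback_type == "finding":          # redetermined text to be considered
            record.append(("finding", content))
        elif feedback_type == "missing_input":    # reference information is lacking
            info = ask_user(content)              # ask the user to provide it
            record.append(("operation_input", info))
```

Each branch mirrors one of the feedback cases above; the updated record is re-submitted to the LLM on the next loop iteration.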
As an example, the first type of indication information may be input into the LLM together with the reasoning process record; its specific form may be:
【You need to answer the question as well as you can. You may use the following tools:
{tools}
Use the following format:
Question: the question you need to answer;
Thinking: what you think should be done next;
Operation: the next operation to perform, using one of the tools in [{tool_names}]; if the operation input lacks information, the operation is to ask the user to provide that information;
Operation input: the input parameters of the operation to be performed;
Finding: the operation result;
Thinking: I now know the final answer to the original question and have obtained the reasoning result.
Several reference examples of the reasoning process are given below for your reference:
Example 1;
Example 2;
Example 3.
Begin!
Question: {input}
{agent_scratchpad}】
The {tools} structure in the above template is in JSON format and contains three fields: name (the tool name), function (the specific API/function), and description (a description of what the tool does). {tool_names} is the set of names of all available tools, {input} is the question input by the user, and {agent_scratchpad} is the reasoning process record over which the LLM continues to reason. One specific tool example is given below; the content of the "operation input" in this example corresponds to the parameters required by "search.run":
Tool(
    name="search",
    function=search.run,
    description="you can use it when answering questions about facts"
)
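As an illustrative sketch (not part of the disclosure), the first-type indication information can be assembled from such tool definitions roughly as follows. The `Tool` dataclass mirrors the three fields above, but the template wording and the `build_first_indication` helper are assumptions:

```python
import json
from dataclasses import dataclass

@dataclass
class Tool:
    name: str          # tool name
    function: object   # the specific API/function, e.g. search.run
    description: str   # description of what the tool does

# A condensed stand-in for the full template shown above.
TEMPLATE = (
    "You may use the following tools:\n{tools}\n"
    "Use the tools in [{tool_names}] for the next operation.\n"
    "Question: {input}\n"
    "{agent_scratchpad}"
)

def build_first_indication(tools, user_input, scratchpad=""):
    """Fill the template: {tools} as JSON, {tool_names} as a name list,
    {input} as the user question, {agent_scratchpad} as the record so far."""
    tools_json = json.dumps(
        [{"name": t.name, "description": t.description} for t in tools]
    )
    names = ", ".join(t.name for t in tools)
    return TEMPLATE.format(tools=tools_json, tool_names=names,
                           input=user_input, agent_scratchpad=scratchpad)
```

On each iteration, the accumulated reasoning process record is passed back in as the `scratchpad` argument.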
S208: constructing second-type indication information based on the preset personality setting of the dialogue system, the user expression text, and the reasoning result.
In some embodiments, second-type indication information may be constructed to characterize the personality setting of the dialogue system, the user expression text, the reasoning result, and several personified answer reference examples, wherein each personified answer reference example comprises a user expression text example, a reasoning result example, and the corresponding personified answer example.
S210: inputting the second-type indication information into the LLM, outputting a personified answer responding to the user expression text, and returning the personified answer to the user.
Several examples of the second type of indication information are given below.
If the reasoning result obtained in step S206 originates from a task-type dialogue API, the second type of indication information may be "personality setting + personified answer reference example + user expression text + reasoning result", for example:
(Personality setting) You are a weather assistant.
(Personified answer reference example) Please answer in a manner suited to communicating with an elderly person, according to the following information. The elderly person's question: "How is the weather today?" Corresponding reasoning result example: "Cloudy turning to rain today, maximum temperature 20°, minimum temperature 10°." Corresponding personified answer example: "There will be some rain today, 10° to 20°; please take care to keep warm and bring an umbrella."
(User expression text + reasoning result) Please answer in a manner suited to communicating with an elderly person, according to the following information. The elderly person's question: "How is the weather today?" The reasoning result is: "Cloudy turning to sunny today, maximum temperature 25°, minimum temperature 20°." Please supply the corresponding personified answer: "_____" (content to be output by the LLM).
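A minimal sketch of assembling such second-type indication information by concatenating the parts in the stated order; the function name and the exact connective wording are illustrative assumptions, not the disclosure's own implementation:

```python
def build_second_indication(personality, reference_example, user_text, reasoning_result):
    """Compose 'personality setting + personified answer reference example
    + user expression text + reasoning result' into one prompt string."""
    return "\n".join([
        f"(Personality setting) {personality}",
        f"(Personified answer reference example) {reference_example}",
        f'(User expression text) The question: "{user_text}"',
        f'(Reasoning result) "{reasoning_result}"',
        'Please supply the corresponding personified answer: "_____"',
    ])
```

The resulting string is what step S210 feeds into the LLM, which fills in the blank with the personified answer.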
In addition, if the reasoning result obtained in step S206 originates from a knowledge-graph dialogue, the second type of indication information may likewise be "personality setting + personified answer reference example + user expression text + reasoning result". It should be noted that, in the case of retrieval failure, since professional-domain knowledge must avoid risk, the user is informed through a fixed answer that the question cannot be answered for now. For example:
(Personality setting) You are a medical worker.
(Personified answer reference example) Please answer in a manner suited to communicating with an elderly person, according to the following information. The elderly person's question: "What can a person with hypertension eat?" Corresponding reasoning result example: "Foods suitable for hypertension include: pumpkin seed kernels, chicken, pine nut kernels, and sesame." Corresponding personified answer example: "For hypertensive patients, a reasonable diet is very important. Pumpkin seed kernels, chicken, pine nuts, and sesame are all foods suitable for hypertensive patients. These foods are rich in nutrients such as protein, fiber, and potassium, and help to control blood pressure. It is recommended to add appropriate amounts of these foods to the diet and to make reasonable dietary adjustments under a doctor's guidance, so as to achieve a better health effect."
(User expression text + reasoning result) Please answer in a manner suited to communicating with an elderly person, according to the following information. The elderly person's question: "What can a person with hypoglycemia eat?" The reasoning result is: "Foods for hypoglycemia include: cashews, pumpkin seed kernels, duck liver, and lotus seeds." Please supply the corresponding personified answer: "_____" (content to be output by the LLM).
In addition, if the reasoning result obtained in step S206 originates from a knowledge-base dialogue, the second type of indication information may likewise be "personality setting + personified answer reference example + user expression text + reasoning result", which is not further illustrated here. It should be noted that the knowledge-base search result (the reasoning result) consists of the answers whose questions' semantic similarity to the input question exceeds a threshold, ranked in the top three. In the case of retrieval failure, since professional-domain knowledge must avoid risk, the user can be told through a fixed answer that the question cannot be answered for now.
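The retrieval rule described here (answers whose question similarity exceeds a threshold, ranked in the top three, with a fixed fallback answer on failure) can be sketched as follows. A real system would use semantic embeddings; the string-similarity measure and the fixed answer wording are stand-ins for illustration:

```python
from difflib import SequenceMatcher

FIXED_ANSWER = "Sorry, this question cannot be answered for now."

def kb_search(query, kb, threshold=0.6):
    """kb: list of (question, answer) pairs. Return up to three answers whose
    question similarity to the query exceeds the threshold, best first;
    fall back to the fixed risk-avoiding answer when retrieval fails."""
    scored = [(SequenceMatcher(None, query, q).ratio(), a) for q, a in kb]
    hits = sorted((s, a) for s, a in scored if s > threshold)[::-1][:3]
    return [a for _, a in hits] if hits else [FIXED_ANSWER]
```

The fixed fallback is what keeps the dialogue system from improvising in a professional domain when nothing in the knowledge base matches.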
Furthermore, if the dialogue system needs to chat casually with the user, the second type of indication information may be "personality setting + user expression text", for example:
(Personality setting) You are a home-care companion.
(User expression text) Please answer the elderly person's question patiently: "Can you chat with me?"
Please supply the corresponding personified answer: "_____" (content to be output by the LLM).
The present disclosure also provides a dialogue apparatus applied to a dialogue system, the apparatus comprising:
the determining module is used for determining a preset personified reasoning paradigm, wherein the personified reasoning paradigm comprises at least one round of a personified reasoning process, and each round comprises: a thinking step, an operation step, and a finding step; the thinking step is to think with the aim of responding to the text to be considered and to judge whether an information query operation is needed to advance the thinking: if an information query operation is determined to be needed, the operation content of the information query operation to be performed is determined, and if not, the text to be considered is taken as the reasoning result; the operation step is to execute the operation content of the information query operation determined in the thinking step by calling one or more information query interfaces (APIs); the finding step is to determine the information query result of the operation step and to take the determined information query result as the new text to be considered;
the first processing module is used for constructing first-type indication information based on the personified reasoning paradigm and inputting the first-type indication information into a large language model (LLM) to instruct the LLM to interact with the dialogue system according to the personified reasoning paradigm;
the acquisition module is used for acquiring the user expression text when the user converses with the dialogue system;
the interaction module is used for taking the user expression text as the text to be considered and inputting it into the LLM, so that the LLM outputs a reasoning result through at least one round of the personified reasoning process;
the second processing module is used for constructing second-type indication information based on the preset personality setting of the dialogue system, the user expression text, and the reasoning result;
and the answer module is used for inputting the second-type indication information into the LLM, outputting a personified answer responding to the user expression text, and returning the personified answer to the user.
The present disclosure also provides a computer readable storage medium, as shown in fig. 3, having stored thereon a computer program 140, which when executed by a processor, implements a method of an embodiment of the present disclosure.
The present disclosure also provides a computing device comprising a memory, a processor; the memory is used to store computer instructions executable on a processor for implementing the methods of the embodiments of the present disclosure when the computer instructions are executed.
Fig. 4 is a schematic structural diagram of a computing device provided by the present disclosure. The computing device 15 may include, but is not limited to: a processor 151, a memory 152, and a bus 153 that connects the various system components (including the memory 152 and the processor 151).
The memory 152 stores computer instructions executable by the processor 151, such that the processor 151 can perform the method of any embodiment of the present disclosure. The memory 152 may include a random-access memory unit (RAM) 1521, a cache memory unit 1522, and/or a read-only memory unit (ROM) 1523. The memory 152 may also include a program tool 1525 having a set of program modules 1524, the program modules 1524 including, but not limited to: an operating system, one or more application programs, other program modules, and program data; each or some combination of these may include an implementation of a network environment.
The bus 153 may include, for example, a data bus, an address bus, and a control bus. The computing device 15 may also communicate with external devices 155, such as a keyboard or a Bluetooth device, via the I/O interface 154. The computing device 15 may also communicate with one or more networks, such as local area networks, wide area networks, and public networks, through a network adapter 156. As shown, the network adapter 156 may also communicate with other modules of the computing device 15 over the bus 153.
Furthermore, although the operations of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in that particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps may be combined into one step, and/or one step may be decomposed into multiple steps.
While the spirit and principles of the present disclosure have been described with reference to several particular embodiments, it is to be understood that the disclosure is not limited to the particular embodiments disclosed, nor does the division into aspects imply that features in these aspects cannot be combined; this division is made for convenience of description only. The disclosure is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
The system, apparatus, module or unit set forth in the above embodiments may be implemented in particular by a computer chip or entity, or by a product having a certain function. One typical implementation is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being functionally divided into various units, respectively. Of course, the functions of each element may be implemented in one or more software and/or hardware elements when implemented in the present specification.
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
This specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including memory storage devices.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks. In a typical configuration, a computer includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random-access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random-access memory (SRAM), dynamic random-access memory (DRAM), other types of random-access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic disk storage, quantum memory, graphene-based storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by the computing device. As defined herein, computer-readable media do not include transitory computer-readable media (transmission media) such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing describes several embodiments of the present disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims can be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing are also possible or may be advantageous.
The terminology used in the various embodiments of the description is for the purpose of describing particular embodiments only and is not intended to be limiting of the various embodiments of the description. As used in this specification, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any or all possible combinations of one or more of the associated listed items.
It should be understood that although the terms first, second, third, etc. may be used in various embodiments of the present description to describe various information, this information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the various embodiments of the present description. The word "if" as used herein may, depending on the context, be interpreted as "when", "upon", or "in response to determining".
In this specification, the embodiments are described in a progressive manner; identical and similar parts of the embodiments may be referred to across embodiments, and each embodiment mainly describes its differences from the others. In particular, for the apparatus embodiments, since they are substantially similar to the method embodiments, the description is relatively simple; for relevant points, reference is made to the description of the method embodiments. The apparatus embodiments described above are merely illustrative, in that the modules illustrated as separate components may or may not be physically separate, and the functions of the modules may be implemented in the same piece or pieces of software and/or hardware when implementing the embodiments of the present disclosure. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of a given embodiment. Those of ordinary skill in the art can understand and implement this without undue burden.
The foregoing description of the preferred embodiments is merely illustrative of the present invention and is not intended to limit the embodiments of the present invention, and any modifications, equivalents, improvements, etc. made within the spirit and principles of the embodiments of the present invention are intended to be included within the scope of the embodiments of the present invention.
Claims (13)
1. A dialog method, applied to a dialog system, the method comprising:
determining a preset personified reasoning paradigm, wherein the personified reasoning paradigm comprises at least one round of a personified reasoning process, and each round of the personified reasoning process comprises: a thinking step, an operation step, and a finding step; the thinking step is to think with the aim of responding to the text to be considered and to judge whether an information query operation is needed to advance the thinking: if an information query operation is determined to be needed, determining the operation content of the information query operation to be performed, and if not, taking the text to be considered as the reasoning result; the operation step is to execute the operation content of the information query operation determined in the thinking step by calling one or more information query interfaces (APIs); the finding step is to determine the information query result of the operation step and to take the determined information query result as the new text to be considered;
constructing first-type indication information based on the personified reasoning paradigm, and inputting the first-type indication information into a large language model (LLM) to instruct the LLM to interact with the dialogue system according to the personified reasoning paradigm;
acquiring the user expression text when the user converses with the dialogue system;
taking the user expression text as the text to be considered and inputting it into the LLM, so that the LLM outputs a reasoning result through at least one round of the personified reasoning process;
constructing second-type indication information based on the preset personality setting of the dialogue system, the user expression text, and the reasoning result;
and inputting the second-type indication information into the LLM, outputting a personified answer responding to the user expression text, and returning the personified answer to the user.
2. The method of claim 1, wherein constructing the first-type indication information based on the personified reasoning paradigm comprises:
constructing first-type indication information characterizing the personified reasoning paradigm, an API set callable in the operation step, and a plurality of personified reasoning process reference examples;
and wherein inputting the first-type indication information into the large language model (LLM) to instruct the LLM to interact with the dialogue system according to the personified reasoning paradigm comprises:
inputting the first-type indication information into the LLM to instruct the LLM, according to the personified reasoning paradigm and with reference to the plurality of personified reasoning process reference examples, to call one or more APIs in the API set to interact with the dialogue system.
3. The method of claim 2, the API set comprising at least one of:
weather platform API, medicine inquiry API, one-key driving API, browser search API, database API, math calculation API, insurance inquiry API and insurance recommendation API.
4. The method of claim 2, the API set comprising at least one of:
a task dialogue API, a knowledge graph dialogue API, a knowledge base dialogue API, and a chatting API.
5. The method of claim 1, wherein taking the user expression text as the text to be considered and inputting it into the LLM, so that the LLM outputs the reasoning result through at least one round of the personified reasoning process, comprises:
creating an empty personified reasoning process record, taking the user expression text as the text to be considered, filling it into the reasoning process record, inputting the reasoning process record into the LLM, and iteratively performing the following steps:
acquiring the interactive feedback output by the LLM;
if the interactive feedback is a reasoning result, ending the iteration;
if the interactive feedback is the operation content of an information query operation to be performed, filling the operation content into the latest thinking step in the reasoning process record, and then inputting the reasoning process record into the LLM;
if the interactive feedback is the operation process of executing the operation content of the information query operation, filling the operation process into the latest operation step in the reasoning process record, and then inputting the reasoning process record into the LLM;
and if the interactive feedback is a redetermined text to be considered, filling that text to be considered into the latest finding step in the reasoning process record, and then inputting the reasoning process record into the LLM.
6. The method of claim 5, wherein the iteratively performed steps further comprise:
if the interactive feedback indicates that reference information on which the operation content of the information query operation depends is lacking, acquiring the reference information, filling the reference information into the operation input associated with the latest operation step in the reasoning process record, and inputting the reasoning process record into the LLM.
7. The method of claim 6, wherein acquiring the reference information comprises:
returning to the user an inquiry statement asking for the reference information;
and receiving the reference information provided by the user.
8. The method of claim 1, wherein constructing the second-type indication information based on the preset personality setting of the dialogue system, the user expression text, and the reasoning result comprises:
constructing second-type indication information characterizing the personality setting of the dialogue system, the user expression text, the reasoning result, and a plurality of personified answer reference examples;
wherein each personified answer reference example comprises: a user expression text example, a reasoning result example, and the corresponding personified answer example.
9. The method of claim 1, wherein the LLM comprises a GPT model.
10. The method of claim 1, further comprising:
constructing a plurality of personified reasoning process samples in advance using a corpus from a specified business scenario;
and fine-tuning the LLM based on the plurality of personified reasoning process samples.
11. A dialog device for use in a dialog system, the device comprising:
the determining module is used for determining a preset personified reasoning paradigm, wherein the personified reasoning paradigm comprises at least one round of a personified reasoning process, and each round comprises: a thinking step, an operation step, and a finding step; the thinking step is to think with the aim of responding to the text to be considered and to judge whether an information query operation is needed to advance the thinking: if an information query operation is determined to be needed, the operation content of the information query operation to be performed is determined, and if not, the text to be considered is taken as the reasoning result; the operation step is to execute the operation content of the information query operation determined in the thinking step by calling one or more information query interfaces (APIs); the finding step is to determine the information query result of the operation step and to take the determined information query result as the new text to be considered;
the first processing module is used for constructing first-type indication information based on the personified reasoning paradigm and inputting the first-type indication information into a large language model (LLM) to instruct the LLM to interact with the dialogue system according to the personified reasoning paradigm;
the acquisition module is used for acquiring the user expression text when the user converses with the dialogue system;
the interaction module is used for taking the user expression text as the text to be considered and inputting it into the LLM, so that the LLM outputs a reasoning result through at least one round of the personified reasoning process;
the second processing module is used for constructing second-type indication information based on the preset personality setting of the dialogue system, the user expression text, and the reasoning result;
and the answer module is used for inputting the second-type indication information into the LLM, outputting a personified answer responding to the user expression text, and returning the personified answer to the user.
12. A computing device comprising a memory, a processor; the memory is for storing computer instructions executable on a processor for implementing the method of any one of claims 1 to 10 when the computer instructions are executed.
13. A computer readable storage medium having stored thereon a computer program which when executed by a processor implements the method of any of claims 1 to 10.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310830861.1A CN116561286B (en) | 2023-07-06 | 2023-07-06 | Dialogue method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310830861.1A CN116561286B (en) | 2023-07-06 | 2023-07-06 | Dialogue method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN116561286A true CN116561286A (en) | 2023-08-08 |
CN116561286B CN116561286B (en) | 2023-10-27 |
Family
ID=87490150
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310830861.1A Active CN116561286B (en) | 2023-07-06 | 2023-07-06 | Dialogue method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116561286B (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118608291A (en) * | 2024-07-25 | 2024-09-06 | 苏州元脑智能科技有限公司 | Computing power transaction service system, method, platform, electronic equipment and medium |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5241621A (en) * | 1991-06-26 | 1993-08-31 | Digital Equipment Corporation | Management issue recognition and resolution knowledge processor |
US7869998B1 (en) * | 2002-04-23 | 2011-01-11 | At&T Intellectual Property Ii, L.P. | Voice-enabled dialog system |
CN106469212A (en) * | 2016-09-05 | 2017-03-01 | 北京百度网讯科技有限公司 | Man-machine interaction method based on artificial intelligence and device |
CN111597314A (en) * | 2020-04-20 | 2020-08-28 | 科大讯飞股份有限公司 | Reasoning question-answering method, device and equipment |
CN114860869A (en) * | 2022-03-30 | 2022-08-05 | 北京邮电大学 | Controllable universal dialogue model with generalized intentions |
CN115221271A (en) * | 2022-06-10 | 2022-10-21 | 网易(杭州)网络有限公司 | Dialog reply method and device, and language model training method and device |
CN115455985A (en) * | 2022-09-19 | 2022-12-09 | 苏州慧君陶智能科技有限公司 | Natural language system processing method based on machine reading understanding |
KR102506404B1 (en) * | 2022-06-10 | 2023-03-07 | 큐에라소프트(주) | Decision-making simulation apparatus and method using pre-trained language model |
WO2023038654A1 (en) * | 2021-09-07 | 2023-03-16 | Google Llc | Using large language model(s) in generating automated assistant response(s) |
CN115905852A (en) * | 2022-07-12 | 2023-04-04 | 南京航空航天大学 | Story generation method, system, storage medium and terminal based on pre-training prompt |
CN116129868A (en) * | 2022-12-29 | 2023-05-16 | 上海阅文信息技术有限公司 | Method and system for generating structured photo |
CN116303962A (en) * | 2023-03-21 | 2023-06-23 | 北京百度网讯科技有限公司 | Dialogue generation method, training method, device and equipment for deep learning model |
Non-Patent Citations (2)
Title |
---|
MUHAMMAD TALLAL SAEED et al.: "Comprehensive Bond Graph Modeling and Optimal Control of an Anthropomorphic Mechatronic Prosthetic Hand", 2019 IEEE International Conference on Mechatronics and Automation (ICMA), pages 1-4 * |
SHI Jian: "Research on an Analysis Model of Personification and Objectification Language Based on Similarity Conventional Relations", Journal of Shaanxi Xueqian Normal University, pages 75-78 * |
Also Published As
Publication number | Publication date |
---|---|
CN116561286B (en) | 2023-10-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109616108B (en) | Multi-turn dialogue interaction processing method and device, electronic equipment and storage medium | |
US11386271B2 (en) | Mathematical processing method, apparatus and device for text problem, and storage medium | |
US11145291B2 (en) | Training natural language system with generated dialogues | |
WO2021108679A1 (en) | Contextual and intent based natural language processing system and method | |
WO2023065211A1 (en) | Information acquisition method and apparatus | |
CN111708869B (en) | Processing method and device for man-machine conversation | |
CN112084789B (en) | Text processing method, device, equipment and storage medium | |
CN116932708A (en) | Open domain natural language reasoning question-answering system and method driven by large language model | |
EP2757510A1 (en) | Method and system for linking data sources for processing composite concepts | |
CN105094315A (en) | Method and apparatus for smart man-machine chat based on artificial intelligence | |
CN116881428B (en) | Language model training method and device | |
CN116561286B (en) | Dialogue method and device | |
CN113569017B (en) | Model processing method and device, electronic equipment and storage medium | |
US20230274095A1 (en) | Autonomous conversational ai system without any configuration by a human | |
US11501086B2 (en) | Systems and methods for zero-shot, fast-generation and implementation of an intelligent virtual dialogue agent using one or more pre-trained machine learning-based language models and a response corpus | |
CN112650842A (en) | Human-computer interaction based customer service robot intention recognition method and related equipment | |
CN110931002B (en) | Man-machine interaction method, device, computer equipment and storage medium | |
CN117421398A (en) | Man-machine interaction method, device, equipment and storage medium | |
Chu | Recipe bot: The application of conversational ai in home cooking assistant | |
Aattouri et al. | Modeling of an artificial intelligence based enterprise callbot with natural language processing and machine learning algorithms | |
CN114925206A (en) | Artificial intelligence body, voice information recognition method, storage medium and program product | |
CN117648422A (en) | Question-answer prompt system, question-answer prompt, library construction and model training method and device | |
Cafaro et al. | Selecting and expressing communicative functions in a SAIBA-compliant agent framework | |
CN116910201A (en) | Dialogue data generation method and related equipment thereof | |
CN109002498B (en) | Man-machine conversation method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |